Jan 09 10:45:53 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 09 10:45:53 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 09 10:45:53 crc restorecon[4692]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc 
restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc 
restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 
10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 
crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 
10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 09 10:45:54 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.685121 4727 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688207 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688230 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688235 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688240 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688246 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688250 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688255 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688260 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688264 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688270 4727 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallGCP Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688274 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688280 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688296 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688305 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688313 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688319 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688326 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688333 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688339 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688344 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688349 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688354 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688358 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688363 4727 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688367 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688372 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688377 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688381 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688385 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688390 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688395 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688399 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688405 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688409 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688414 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688420 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688425 4727 feature_gate.go:330] unrecognized feature gate: Example Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688430 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 
10:45:54.688435 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688440 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688445 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688450 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688455 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688460 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688465 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688470 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688476 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688481 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688487 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688492 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688496 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688501 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688505 4727 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688529 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688533 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688538 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688543 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688548 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688553 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688557 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688563 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688571 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688577 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688581 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688586 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688591 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688597 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688602 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688607 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688611 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688616 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688719 4727 flags.go:64] FLAG: --address="0.0.0.0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688729 4727 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688738 4727 flags.go:64] FLAG: --anonymous-auth="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688744 4727 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688753 4727 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688759 4727 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.688765 4727 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688771 4727 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688777 4727 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688782 4727 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688788 4727 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688793 4727 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688799 4727 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688806 4727 flags.go:64] FLAG: --cgroup-root="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688811 4727 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688816 4727 flags.go:64] FLAG: --client-ca-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688821 4727 flags.go:64] FLAG: --cloud-config="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688826 4727 flags.go:64] FLAG: --cloud-provider="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688830 4727 flags.go:64] FLAG: --cluster-dns="[]" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688837 4727 flags.go:64] FLAG: --cluster-domain="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688842 4727 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688847 4727 flags.go:64] FLAG: --config-dir="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688851 4727 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 09 10:45:54 
crc kubenswrapper[4727]: I0109 10:45:54.688857 4727 flags.go:64] FLAG: --container-log-max-files="5" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688863 4727 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688869 4727 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688874 4727 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688880 4727 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688885 4727 flags.go:64] FLAG: --contention-profiling="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688890 4727 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688895 4727 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688900 4727 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688905 4727 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688912 4727 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688917 4727 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688922 4727 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688927 4727 flags.go:64] FLAG: --enable-load-reader="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688932 4727 flags.go:64] FLAG: --enable-server="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688938 4727 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688944 4727 flags.go:64] 
FLAG: --event-burst="100" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688949 4727 flags.go:64] FLAG: --event-qps="50" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688955 4727 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688960 4727 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688965 4727 flags.go:64] FLAG: --eviction-hard="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688971 4727 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688977 4727 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688981 4727 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688987 4727 flags.go:64] FLAG: --eviction-soft="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688992 4727 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688997 4727 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689003 4727 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689007 4727 flags.go:64] FLAG: --experimental-mounter-path="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689011 4727 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689016 4727 flags.go:64] FLAG: --fail-swap-on="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689020 4727 flags.go:64] FLAG: --feature-gates="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689025 4727 flags.go:64] FLAG: --file-check-frequency="20s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689030 4727 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689034 4727 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689039 4727 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689043 4727 flags.go:64] FLAG: --healthz-port="10248" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689050 4727 flags.go:64] FLAG: --help="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689054 4727 flags.go:64] FLAG: --hostname-override="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689059 4727 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689063 4727 flags.go:64] FLAG: --http-check-frequency="20s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689067 4727 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689071 4727 flags.go:64] FLAG: --image-credential-provider-config="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689076 4727 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689080 4727 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689084 4727 flags.go:64] FLAG: --image-service-endpoint="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689088 4727 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689093 4727 flags.go:64] FLAG: --kube-api-burst="100" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689097 4727 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689101 4727 flags.go:64] FLAG: --kube-api-qps="50" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689108 4727 
flags.go:64] FLAG: --kube-reserved="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689112 4727 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689117 4727 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689121 4727 flags.go:64] FLAG: --kubelet-cgroups="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689126 4727 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689130 4727 flags.go:64] FLAG: --lock-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689134 4727 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689138 4727 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689142 4727 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689149 4727 flags.go:64] FLAG: --log-json-split-stream="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689154 4727 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689158 4727 flags.go:64] FLAG: --log-text-split-stream="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689163 4727 flags.go:64] FLAG: --logging-format="text" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689167 4727 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689172 4727 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689177 4727 flags.go:64] FLAG: --manifest-url="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689181 4727 flags.go:64] FLAG: --manifest-url-header="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689187 4727 
flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689192 4727 flags.go:64] FLAG: --max-open-files="1000000" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689199 4727 flags.go:64] FLAG: --max-pods="110" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689204 4727 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689209 4727 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689215 4727 flags.go:64] FLAG: --memory-manager-policy="None" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689219 4727 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689225 4727 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689230 4727 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689235 4727 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689247 4727 flags.go:64] FLAG: --node-status-max-images="50" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689252 4727 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689257 4727 flags.go:64] FLAG: --oom-score-adj="-999" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689262 4727 flags.go:64] FLAG: --pod-cidr="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689267 4727 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689275 4727 flags.go:64] FLAG: 
--pod-manifest-path="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689280 4727 flags.go:64] FLAG: --pod-max-pids="-1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689286 4727 flags.go:64] FLAG: --pods-per-core="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689291 4727 flags.go:64] FLAG: --port="10250" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689296 4727 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689301 4727 flags.go:64] FLAG: --provider-id="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689305 4727 flags.go:64] FLAG: --qos-reserved="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689310 4727 flags.go:64] FLAG: --read-only-port="10255" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689315 4727 flags.go:64] FLAG: --register-node="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689320 4727 flags.go:64] FLAG: --register-schedulable="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689325 4727 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689333 4727 flags.go:64] FLAG: --registry-burst="10" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689338 4727 flags.go:64] FLAG: --registry-qps="5" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689342 4727 flags.go:64] FLAG: --reserved-cpus="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689347 4727 flags.go:64] FLAG: --reserved-memory="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689354 4727 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689360 4727 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689365 4727 flags.go:64] FLAG: --rotate-certificates="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.689370 4727 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689376 4727 flags.go:64] FLAG: --runonce="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689381 4727 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689387 4727 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689392 4727 flags.go:64] FLAG: --seccomp-default="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689397 4727 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689402 4727 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689407 4727 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689412 4727 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689417 4727 flags.go:64] FLAG: --storage-driver-password="root" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689422 4727 flags.go:64] FLAG: --storage-driver-secure="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689427 4727 flags.go:64] FLAG: --storage-driver-table="stats" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689432 4727 flags.go:64] FLAG: --storage-driver-user="root" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689437 4727 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689442 4727 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689448 4727 flags.go:64] FLAG: --system-cgroups="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689453 4727 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689461 4727 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689466 4727 flags.go:64] FLAG: --tls-cert-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689471 4727 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689477 4727 flags.go:64] FLAG: --tls-min-version="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689482 4727 flags.go:64] FLAG: --tls-private-key-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689487 4727 flags.go:64] FLAG: --topology-manager-policy="none" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689492 4727 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689497 4727 flags.go:64] FLAG: --topology-manager-scope="container" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689501 4727 flags.go:64] FLAG: --v="2" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689528 4727 flags.go:64] FLAG: --version="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689535 4727 flags.go:64] FLAG: --vmodule="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689541 4727 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689547 4727 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689881 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689913 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689920 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 09 10:45:54 crc 
kubenswrapper[4727]: W0109 10:45:54.689926 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689935 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689940 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689946 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689951 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689956 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689960 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689964 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689969 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689973 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689979 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689985 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689991 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689996 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690002 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690006 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690011 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690016 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690022 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690027 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690031 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690036 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690039 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690044 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690048 4727 feature_gate.go:330] unrecognized feature gate: Example Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690052 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690057 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690061 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690066 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690072 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690077 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690083 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690089 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690094 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690099 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690104 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690108 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690118 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690123 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690127 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690131 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690136 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690140 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690144 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690149 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690153 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: 
W0109 10:45:54.690158 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690163 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690167 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690172 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690176 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690180 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690184 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690189 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690193 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690197 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690201 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690205 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690209 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690213 4727 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690217 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690222 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690227 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690232 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690236 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690241 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690245 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690249 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.690289 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.703297 4727 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.703351 4727 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703470 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703485 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703496 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703505 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703541 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703553 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703563 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703572 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703580 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703588 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703598 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703611 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703622 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703631 4727 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703640 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703650 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703658 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703667 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703675 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703683 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703690 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703698 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703706 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703713 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703721 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703729 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703737 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703744 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703752 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703760 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703768 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703778 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703786 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703794 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703802 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703810 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703817 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703825 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703833 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703841 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703851 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703859 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703867 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703878 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703887 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703896 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703904 4727 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703912 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703922 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703931 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703940 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703949 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703957 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703965 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703973 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703982 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703990 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703997 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704005 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704013 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704020 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704028 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704035 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704043 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704051 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704059 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704067 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704074 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704082 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704090 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704098 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.704111 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704349 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704366 4727 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704376 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704384 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704394 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704406 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704415 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704425 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704433 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704444 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704454 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704464 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704475 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704486 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704496 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704536 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704546 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704555 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704566 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704576 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704585 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704595 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704606 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704616 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704626 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704637 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704646 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704656 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704666 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704676 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704686 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704695 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704705 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704715 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704725 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704735 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704745 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704754 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704763 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704773 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704784 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704796 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704808 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704820 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704832 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704842 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704854 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704866 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704876 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704887 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704897 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704909 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704917 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704924 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704934 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704943 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704954 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704964 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704974 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704988 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705001 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705012 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705024 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705034 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705045 4727 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705054 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705064 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705075 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705084 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705094 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705104 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.705119 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.705438 4727 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.714391 4727 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.714613 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.715588 4727 server.go:997] "Starting client certificate rotation"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.715619 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.716135 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 20:33:49.931372023 +0000 UTC
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.716288 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.724558 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.726684 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.727572 4727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.739440 4727 log.go:25] "Validated CRI v1 runtime API"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.757264 4727 log.go:25] "Validated CRI v1 image API"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.759411 4727 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.762960 4727 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-09-10-41-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.763012 4727 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.780252 4727 manager.go:217] Machine: {Timestamp:2026-01-09 10:45:54.778563188 +0000 UTC m=+0.228468019 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a4360e9d-d030-43eb-b040-259eb77bd39d BootID:efb1b54a-bec3-40af-877b-b80c0cec5403 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1b:7d:89 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1b:7d:89 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:88:e7:65 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:39:23:73 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:43:ca Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:74:2a:16 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:38:50:2a:51:7a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:9e:fc:33:33:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.780610 4727 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.780907 4727 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.782217 4727 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.782710 4727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.782785 4727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.783560 4727 topology_manager.go:138] "Creating topology manager with none policy"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.783628 4727 container_manager_linux.go:303] "Creating device plugin manager"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.784116 4727 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.784550 4727 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.785185 4727 state_mem.go:36] "Initialized new in-memory state store"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.785362 4727 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.786906 4727 kubelet.go:418] "Attempting to sync node with API server"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.786946 4727 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.786992 4727 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.787018 4727 kubelet.go:324] "Adding apiserver pod source"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.787038 4727 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.789438 4727 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.789670 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.789779 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.789826 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.790002 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.790038 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.791362 4727 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792293 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792343 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792359 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792374 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792398 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792414 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792429 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792452 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792468 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792484 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792503 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792546 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792873 4727 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.793466 4727 server.go:1280] "Started kubelet" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.793989 4727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.793990 4727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.794379 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.794832 4727 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 09 10:45:54 crc systemd[1]: Started Kubernetes Kubelet. Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.795619 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18890a35c624357a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,LastTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.796639 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is 
enabled Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.796710 4727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.796973 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:40:39.511893293 +0000 UTC Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797074 4727 server.go:460] "Adding debug handlers to kubelet server" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797446 4727 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797478 4727 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797728 4727 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.797892 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.799669 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.799770 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:54 crc 
kubenswrapper[4727]: E0109 10:45:54.797364 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.803767 4727 factory.go:55] Registering systemd factory Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.804365 4727 factory.go:221] Registration of the systemd container factory successfully Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805673 4727 factory.go:153] Registering CRI-O factory Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805728 4727 factory.go:221] Registration of the crio container factory successfully Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805811 4727 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805839 4727 factory.go:103] Registering Raw factory Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805862 4727 manager.go:1196] Started watching for new ooms in manager Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.806975 4727 manager.go:319] Starting recovery of all containers Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811373 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811445 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 09 10:45:54 crc 
kubenswrapper[4727]: I0109 10:45:54.811461 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811475 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811488 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811502 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811536 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811553 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811570 4727 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811584 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811595 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811606 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811618 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811633 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811644 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811653 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811663 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811675 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811685 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811696 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811709 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811720 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811734 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811749 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811761 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811776 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811792 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811809 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811823 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811838 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811850 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811868 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811879 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811892 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811907 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811920 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811937 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811950 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811965 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811983 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811997 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812047 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812064 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812077 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812091 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" 
seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812107 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812122 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812145 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812159 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812174 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812378 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812391 4727 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812411 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812426 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812444 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812461 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812474 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812487 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812503 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812597 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812610 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812624 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812636 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812648 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812668 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812681 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812694 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812717 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812729 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812743 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812758 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812775 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812789 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812803 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812817 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812829 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812843 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812855 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812868 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812882 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812897 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815851 4727 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815905 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815926 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815940 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815953 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815966 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815986 4727 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815999 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816012 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816024 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816036 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816049 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816063 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816076 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816090 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816102 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816113 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816126 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816138 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816151 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816166 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816180 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816195 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816208 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816265 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" 
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816280 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816294 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816309 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816323 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816338 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816351 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816365 4727 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816379 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816394 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816415 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816429 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816443 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816456 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816469 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816482 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816496 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816526 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816542 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816554 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816566 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816579 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816593 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816605 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816616 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816628 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816642 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816655 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816674 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816687 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816707 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816720 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816734 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816746 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816759 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816774 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816788 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816801 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816814 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816825 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816838 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816849 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816861 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816873 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.816888 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816900 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816914 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816927 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816939 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816953 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816967 4727 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816982 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816996 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817010 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817024 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817039 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817052 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817066 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817078 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817092 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817108 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817122 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817140 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817153 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817168 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817182 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817198 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817211 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817225 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 
09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817237 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817251 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817264 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817277 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817291 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817307 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817320 
4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817335 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817352 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817368 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817381 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817394 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817408 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817423 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817436 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817449 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817463 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817479 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817493 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817532 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817545 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817558 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817573 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817587 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817600 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817622 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817637 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817652 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817665 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817679 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817693 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817707 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817723 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817737 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817749 4727 reconstruct.go:97] "Volume reconstruction finished" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817758 4727 reconciler.go:26] "Reconciler: start to sync state" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.832532 4727 manager.go:324] Recovery completed Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.845394 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.848007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.848087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.848101 
4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.850111 4727 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.850147 4727 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.850174 4727 state_mem.go:36] "Initialized new in-memory state store" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.857108 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.858911 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.858959 4727 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.859001 4727 kubelet.go:2335] "Starting kubelet main sync loop" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.859049 4727 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.860733 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.860811 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 
10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.864980 4727 policy_none.go:49] "None policy: Start" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.866136 4727 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.866177 4727 state_mem.go:35] "Initializing new in-memory state store" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.900067 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.912021 4727 manager.go:334] "Starting Device Plugin manager" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.912626 4727 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.912650 4727 server.go:79] "Starting device plugin registration server" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913085 4727 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913302 4727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913479 4727 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913664 4727 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913701 4727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.922559 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.960144 4727 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.960357 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.961928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.961970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962227 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962452 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962537 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963811 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963855 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963815 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.964797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.964842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.964852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965855 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965909 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965932 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.966851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.966869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.966878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967193 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967449 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967556 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968278 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968338 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.969062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.969100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.969109 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.999404 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.014255 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015841 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.016448 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019043 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019133 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019275 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019299 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019392 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019441 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019474 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.120947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121064 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 
10:45:55.121159 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121207 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121226 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121302 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121329 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 
10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121342 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121340 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121560 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121619 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.217018 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218873 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.219959 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.301698 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.316634 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.325918 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.342666 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401 WatchSource:0}: Error finding container 8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401: Status 404 returned error can't find the container with id 8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401 Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.348643 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4 WatchSource:0}: Error finding container 871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4: Status 404 returned error can't find the container with id 871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4 Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.350871 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8 WatchSource:0}: Error finding container 26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8: Status 404 returned error can't find the container with id 26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8 Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.360066 4727 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.368459 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.394527 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb WatchSource:0}: Error finding container 9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb: Status 404 returned error can't find the container with id 9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.399930 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.620279 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622147 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.622631 
4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.749123 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.749207 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.785869 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18890a35c624357a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,LastTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.795650 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.797805 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:29:38.658333949 +0000 UTC Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.864480 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.864618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866101 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" exitCode=0 Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866174 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866218 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"238f0bffda992ac4f0ab43ed575c6762427e33280c6c9900c98b77c6791dcaec"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 
10:45:55.866366 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.867408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.867436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.867445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868542 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae5ff1a01059e577d8aa9eca11df8a4d2d3d74cdfbb0fdb58acaa154cae9e013" exitCode=0 Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae5ff1a01059e577d8aa9eca11df8a4d2d3d74cdfbb0fdb58acaa154cae9e013"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868697 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869024 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc 
kubenswrapper[4727]: I0109 10:45:55.869477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870438 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421" exitCode=0 Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870518 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870587 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871117 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871695 4727 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f" exitCode=0 Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8"} Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871806 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.873138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.873163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.873172 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.948324 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.948402 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:56 crc kubenswrapper[4727]: E0109 10:45:56.201854 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Jan 09 10:45:56 crc kubenswrapper[4727]: W0109 10:45:56.225477 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:56 crc kubenswrapper[4727]: E0109 10:45:56.225677 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:56 crc kubenswrapper[4727]: W0109 10:45:56.277246 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:56 crc kubenswrapper[4727]: E0109 10:45:56.277355 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.422848 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424750 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.798434 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:54:21.024252827 +0000 UTC Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.798536 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h8m24.22571963s for next certificate rotation Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.805689 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 09 
10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878824 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878994 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.880602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.880646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.880661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0"} Jan 09 
10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885636 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885663 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885710 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.886678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.886719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.886731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3"} Jan 09 10:45:56 crc 
kubenswrapper[4727]: I0109 10:45:56.890868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.892301 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ee458da9a63c683c7e9c63e784f29b9752498c2430ccdceff10b1985783b0cd" exitCode=0 Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.892365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ee458da9a63c683c7e9c63e784f29b9752498c2430ccdceff10b1985783b0cd"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.892492 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.893422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.893451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.893461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.894306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9"} Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.894444 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.895232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.895260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.895268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.901996 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c"} Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.902100 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.903112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.903149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.903159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904858 4727 generic.go:334] "Generic (PLEG): 
container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5956cdf046241221791e256787fb6607ebd743de5040a84ee17dd9e976c21cba" exitCode=0 Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5956cdf046241221791e256787fb6607ebd743de5040a84ee17dd9e976c21cba"} Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904957 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904997 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905021 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905069 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 
10:45:57.906079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"840208c8cf6ade2126a2c30c797cb923af67a7e913daba30130f9a051f2a32e3"} Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910845 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8328c42516f23ed81dfa93bfedb532ce8ab4b5cb0d090f1010fa6715017faaa9"} Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2c74d5e83ddefdc953d5796d80f0b900e7c7cea7faa0bfbab4acd3cac387359"} Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910607 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19ffd327efb1695fd60992f7915bffc10705585158d64e224e66b7802c387a5f"} Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.911914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.911952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.911963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.919951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"643f923f7389af30733922fbb5054b81c61914e8aceef8ae1f7b74e1a5b88ac3"} Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.920050 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.920165 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.922783 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.923909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.923970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.923980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.992569 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.667330 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.925580 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.926763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.926798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.926809 4727 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.210598 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.210789 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.212147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.212183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.212197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.616014 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.616191 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.617779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.617853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.617900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.711955 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.927952 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.927966 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.132982 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.133155 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.134485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 
10:46:03.134577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.134591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.416107 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.416354 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.418569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.418640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.418654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:04 crc kubenswrapper[4727]: E0109 10:46:04.922732 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 10:46:05 crc kubenswrapper[4727]: I0109 10:46:05.616201 4727 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:46:05 crc kubenswrapper[4727]: I0109 10:46:05.616334 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.219207 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.219300 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.422578 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.422786 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.424338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.424384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.424398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:06 crc kubenswrapper[4727]: E0109 10:46:06.426648 4727 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.720248 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.731004 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.797413 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 09 10:46:06 crc kubenswrapper[4727]: E0109 10:46:06.807397 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.938213 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.939686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.939738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.939750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 
10:46:06.943300 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.290055 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.290147 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.715928 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]log ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]etcd ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 09 10:46:07 crc kubenswrapper[4727]: 
[+]poststarthook/generic-apiserver-start-informers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-filter ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-apiextensions-informers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-apiextensions-controllers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/crd-informer-synced ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-system-namespaces-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 09 10:46:07 crc kubenswrapper[4727]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/bootstrap-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-kube-aggregator-informers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-registration-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-discovery-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]autoregister-completion ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-openapi-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: livez check failed Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.716007 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.940873 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.941915 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.941982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.941996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.026792 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028059 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028181 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.944298 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.945650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.945717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.945743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:10 crc kubenswrapper[4727]: I0109 10:46:10.940484 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 09 10:46:10 crc kubenswrapper[4727]: I0109 10:46:10.940613 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: 
connection refused" Jan 09 10:46:10 crc kubenswrapper[4727]: I0109 10:46:10.989375 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.002914 4727 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.588837 4727 csr.go:261] certificate signing request csr-x4pgc is approved, waiting to be issued Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.600414 4727 csr.go:257] certificate signing request csr-x4pgc is issued Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.708888 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.709114 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.710558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.710596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.710609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.728675 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.953030 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.954177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.954224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.954238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.286443 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288318 4727 trace.go:236] Trace[46291964]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:59.243) (total time: 13044ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[46291964]: ---"Objects listed" error: 13044ms (10:46:12.288) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[46291964]: [13.044801529s] [13.044801529s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288364 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288456 4727 trace.go:236] Trace[23214324]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:57.913) (total time: 14374ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[23214324]: ---"Objects listed" error: 14374ms (10:46:12.288) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[23214324]: [14.374663778s] [14.374663778s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288481 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288905 4727 trace.go:236] Trace[510296964]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:58.133) (total time: 14155ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[510296964]: ---"Objects listed" error: 14155ms (10:46:12.288) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[510296964]: [14.155415621s] [14.155415621s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288921 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.290838 4727 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.291991 4727 trace.go:236] Trace[559012523]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:59.052) (total time: 13239ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[559012523]: ---"Objects listed" error: 13239ms (10:46:12.291) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[559012523]: [13.239870398s] [13.239870398s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.292012 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.601833 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-09 10:41:11 +0000 UTC, rotation deadline is 2026-09-29 11:01:02.189576932 +0000 UTC Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.601879 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6312h14m49.58770003s for next certificate rotation Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.621067 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.632454 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.941375 4727 apiserver.go:52] "Watching apiserver" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.948773 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.949199 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.949479 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.950937 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.951041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951147 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951147 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.951187 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951264 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.951253 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.952083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.954388 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.954451 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.954395 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.959383 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.959598 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960018 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960593 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960779 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.969173 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 
10:46:12.975850 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.978425 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.984973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.985191 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55978->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.985265 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55978->192.168.126.11:17697: read: connection reset by peer" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.998790 4727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.004886 
4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.019943 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.034857 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.035644 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042211 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042242 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042267 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 
crc kubenswrapper[4727]: I0109 10:46:13.042290 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042315 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042342 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042367 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042419 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042441 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042527 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042625 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042657 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042684 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042751 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042781 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042806 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042834 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042858 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042887 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042961 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042989 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043013 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043064 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043089 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043115 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043139 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043164 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043197 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043220 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043325 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043350 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043421 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043523 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043546 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043570 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043572 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043571 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043652 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043682 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 
10:46:13.043705 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043833 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044213 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044261 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044347 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044379 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044526 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044679 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044796 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.044819 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:13.544796197 +0000 UTC m=+18.994700978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.045138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.045172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.045278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046034 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046071 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046174 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046194 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046232 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046250 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046309 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046329 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046346 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046380 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046430 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046449 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046464 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" 
(UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046695 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046744 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046780 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046812 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046829 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046847 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046889 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046904 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046970 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047019 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047052 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047070 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047085 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047099 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047118 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047136 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047293 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047315 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047350 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047369 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047393 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047427 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047445 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047463 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047481 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047499 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 
10:46:13.047531 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047549 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047567 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047585 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047622 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047650 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047767 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047800 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047846 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047885 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047901 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047945 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047965 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047948 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047993 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047984 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048077 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048114 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048139 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048179 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048229 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048282 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048304 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048355 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048382 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048401 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048423 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048451 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048475 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048543 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048575 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048614 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 
crc kubenswrapper[4727]: I0109 10:46:13.048644 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048668 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048715 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048740 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 
10:46:13.048899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048918 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048939 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049018 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049041 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049064 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049087 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049177 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049195 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049253 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049275 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049319 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049337 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049396 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049418 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049435 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049453 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049492 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049527 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049548 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049568 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049606 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049665 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049911 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049958 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc 
kubenswrapper[4727]: I0109 10:46:13.049978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050150 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050261 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050276 4727 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050289 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050300 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050311 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050322 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050333 4727 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050344 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050355 4727 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050366 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050379 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050392 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050403 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050414 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050425 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050437 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050450 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050463 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050483 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050493 4727 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050524 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050535 4727 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050545 4727 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050556 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050567 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050578 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050590 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050600 4727 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050611 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050625 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node 
\"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050635 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050645 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050655 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050667 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050678 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050687 4727 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050698 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048446 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050720 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.050792 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051805 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051861 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.051903 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:13.551849888 +0000 UTC m=+19.001754669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052393 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052777 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051045 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051044 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051061 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051470 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052900 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051577 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.053007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.053758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054001 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054539 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054889 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055227 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055521 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055591 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055654 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.056050 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057194 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057636 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057815 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.058167 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.058430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.058981 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059149 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059217 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059301 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060042 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060793 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060958 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061264 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.061376 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061400 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.061455 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:13.561433921 +0000 UTC m=+19.011338702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061735 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.062984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067394 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067393 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067723 4727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068467 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069035 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069369 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069420 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069425 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069569 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.070445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.070693 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.070736 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.076094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.076350 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076705 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076737 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076752 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076824 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:13.576800648 +0000 UTC m=+19.026705429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051025 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078947 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.079488 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.079584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.079888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080352 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080785 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080797 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078618 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081203 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081392 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081541 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081124 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.082345 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.082402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083075 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083108 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083128 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083202 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:13.583177529 +0000 UTC m=+19.033082320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.083286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.083610 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084496 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084619 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084922 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084971 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.085424 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.085830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.086112 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.086477 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.086447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087220 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087478 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087842 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.088067 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.089635 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.091110 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.091807 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.092017 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093053 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093246 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093493 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
(OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093605 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093705 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095834 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094500 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094845 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095135 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095288 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095461 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095753 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096035 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096468 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096638 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096959 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.097391 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.097558 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.097912 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.098524 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.098841 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.099169 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.099255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.099866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.103075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.103232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.103299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.104888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105010 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.104974 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105097 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105331 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105832 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.106775 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.107327 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.113865 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.126067 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.127145 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.128593 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.133144 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.133698 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.138050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.149399 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152922 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 
10:46:13.152951 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152968 4727 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152960 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152985 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153059 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153072 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153085 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 
crc kubenswrapper[4727]: I0109 10:46:13.153097 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153110 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153123 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153136 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153150 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153164 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153175 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153185 4727 reconciler_common.go:293] "Volume detached for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153196 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153210 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153223 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153236 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153252 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153266 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153281 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153294 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153307 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153320 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153411 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153430 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153446 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153463 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153477 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153493 4727 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153527 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153546 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153564 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153580 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153596 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153612 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153628 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153642 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153657 4727 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153670 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153686 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153701 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153715 4727 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153730 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153745 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153759 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153772 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153788 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153804 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153818 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153835 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153850 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153864 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153877 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153891 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" 
DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153905 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153919 4727 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153933 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153947 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153961 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153975 4727 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153990 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154004 4727 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154018 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154034 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154048 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154062 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154075 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154090 4727 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154103 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154117 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154131 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154144 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154158 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154172 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154190 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154205 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154218 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154232 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154246 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154260 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154274 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154291 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154305 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: 
I0109 10:46:13.154319 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154335 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154349 4727 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154365 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154383 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154401 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154417 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc 
kubenswrapper[4727]: I0109 10:46:13.154436 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154450 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154462 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154476 4727 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154490 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154524 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154541 4727 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154555 4727 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154569 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154582 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154595 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154607 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154621 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154633 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154646 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc 
kubenswrapper[4727]: I0109 10:46:13.154657 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154671 4727 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154683 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154694 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154706 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154718 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154730 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154745 4727 reconciler_common.go:293] 
"Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154757 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154771 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154792 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154804 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154818 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154829 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154841 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154855 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154868 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154880 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154892 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154906 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154918 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154929 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on 
node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154943 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154958 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154973 4727 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154986 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154999 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155012 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155024 4727 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155037 4727 reconciler_common.go:293] 
"Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155050 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155064 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155075 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155091 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155105 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155118 4727 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155132 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155145 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155159 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155175 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155189 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155201 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155216 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155230 4727 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155244 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155257 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155271 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155285 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155300 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155315 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.160902 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.174292 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.187789 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.273032 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.281633 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.290129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.560866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.560981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.561022 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.560995607 +0000 UTC m=+20.010900388 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.561098 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.561162 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.561144112 +0000 UTC m=+20.011048983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.662337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.662386 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.662408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662465 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662533 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.662519667 +0000 UTC m=+20.112424448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662538 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662559 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662572 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662604 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.662593639 +0000 UTC m=+20.112498420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662758 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662805 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662824 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662918 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.662890447 +0000 UTC m=+20.112795258 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.966812 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a54aae95b4a0d312469fe6ef388542dce7d6e3dad660e3d74aacc03dc9e16ac2"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.970048 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.970109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.970123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"17aade6e432352668b3d4a0e36e7c1205d8e474dcc8a7f099521b96e7937d8fb"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.972265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.972352 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e5b3d69113994016b4a5103d68234de99d20cbdb98841eb85df17dc12b939114"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.974112 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.976488 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c" exitCode=255 Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.976570 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.977559 4727 scope.go:117] "RemoveContainer" containerID="23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.029861 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.071946 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.090451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.110842 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.126986 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.145297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.162010 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.169387 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qlpv5"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.169829 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.171732 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.171881 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.171882 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.183951 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.198081 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.216055 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.231265 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.247471 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.267087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d335f7f5-7ede-4146-9ecc-f0718b547d43-hosts-file\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.267138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrfh\" (UniqueName: \"kubernetes.io/projected/d335f7f5-7ede-4146-9ecc-f0718b547d43-kube-api-access-bgrfh\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.269280 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.288008 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.300268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.318206 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.329812 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.368251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d335f7f5-7ede-4146-9ecc-f0718b547d43-hosts-file\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.368297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrfh\" (UniqueName: 
\"kubernetes.io/projected/d335f7f5-7ede-4146-9ecc-f0718b547d43-kube-api-access-bgrfh\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.368573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d335f7f5-7ede-4146-9ecc-f0718b547d43-hosts-file\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.396443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrfh\" (UniqueName: \"kubernetes.io/projected/d335f7f5-7ede-4146-9ecc-f0718b547d43-kube-api-access-bgrfh\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.487355 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.500436 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd335f7f5_7ede_4146_9ecc_f0718b547d43.slice/crio-661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2 WatchSource:0}: Error finding container 661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2: Status 404 returned error can't find the container with id 661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2 Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.551711 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hzdp7"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.552161 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553099 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7sgfm"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553679 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-57zpr"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553855 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553947 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.559072 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560070 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560233 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560342 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560725 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560782 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560875 4727 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.561149 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.561213 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.561332 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.562249 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.563533 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.570226 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.570325 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.570436 4727 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.570496 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.570473476 +0000 UTC m=+22.020378257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.570815 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.570808036 +0000 UTC m=+22.020712817 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.576669 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.618193 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.649565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.667878 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cnibin\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-kubelet\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-os-release\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670707 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-bin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670723 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-hostroot\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670746 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-multus-certs\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670778 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670797 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670810 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-proxy-tls\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670859 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.670843223 +0000 UTC m=+22.120748004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cnibin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-netns\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670969 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wkd\" (UniqueName: \"kubernetes.io/projected/f0230d78-c2b3-4a02-8243-6b39e8eecb90-kube-api-access-h2wkd\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:14 
crc kubenswrapper[4727]: I0109 10:46:14.671055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-rootfs\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671078 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-daemon-config\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-system-cni-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671189 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671201 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671210 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671227 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cni-binary-copy\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671329 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.671303846 +0000 UTC m=+22.121208827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-socket-dir-parent\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-conf-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " 
pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rp9j\" (UniqueName: \"kubernetes.io/projected/c3694c5b-19cf-464e-90b7-8e719d3a0d11-kube-api-access-6rp9j\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671790 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-k8s-cni-cncf-io\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671821 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-os-release\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-system-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671950 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-multus\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-etc-kubernetes\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktz9\" (UniqueName: \"kubernetes.io/projected/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-kube-api-access-6ktz9\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.672143 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.672198 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.672185141 +0000 UTC m=+22.122090122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.687400 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.703758 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.716822 4727 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717169 4727 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717216 4727 reflector.go:484] 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717290 4727 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717333 4727 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717366 4727 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717402 4727 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717433 4727 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.717418 4727 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c/status\": read tcp 38.102.83.200:45624->38.102.83.200:6443: use of closed network connection" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717589 4727 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717677 4727 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717883 4727 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717685 4727 reflector.go:484] 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.718072 4727 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.718102 4727 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.719380 4727 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.719443 4727 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.749456 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.768957 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-netns\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772687 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-daemon-config\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772712 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2wkd\" (UniqueName: \"kubernetes.io/projected/f0230d78-c2b3-4a02-8243-6b39e8eecb90-kube-api-access-h2wkd\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772739 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-rootfs\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-system-cni-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " 
pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cni-binary-copy\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-socket-dir-parent\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-conf-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772839 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " 
pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772893 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rp9j\" (UniqueName: \"kubernetes.io/projected/c3694c5b-19cf-464e-90b7-8e719d3a0d11-kube-api-access-6rp9j\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-netns\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772920 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-k8s-cni-cncf-io\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-etc-kubernetes\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" 
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773020 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-conf-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773055 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-socket-dir-parent\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-os-release\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-os-release\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-system-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773186 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-etc-kubernetes\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-multus\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-system-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773216 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ktz9\" (UniqueName: \"kubernetes.io/projected/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-kube-api-access-6ktz9\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773260 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-rootfs\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cnibin\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-kubelet\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773413 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-os-release\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-bin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-hostroot\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773496 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-multus-certs\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773530 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-multus\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773548 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-proxy-tls\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cnibin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-kubelet\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cnibin\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773586 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-os-release\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773657 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-multus-certs\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773687 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773700 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-bin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-hostroot\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cnibin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773770 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-system-cni-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774124 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-daemon-config\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774170 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-k8s-cni-cncf-io\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cni-binary-copy\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774377 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774815 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: 
\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.780280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-proxy-tls\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.787151 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.792827 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ktz9\" (UniqueName: \"kubernetes.io/projected/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-kube-api-access-6ktz9\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.792996 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rp9j\" (UniqueName: \"kubernetes.io/projected/c3694c5b-19cf-464e-90b7-8e719d3a0d11-kube-api-access-6rp9j\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.799799 
4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2wkd\" (UniqueName: \"kubernetes.io/projected/f0230d78-c2b3-4a02-8243-6b39e8eecb90-kube-api-access-h2wkd\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.803178 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.820189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.836728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.852962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.860142 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.860168 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.860305 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.860295 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.860420 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.860476 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.864252 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.865011 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.865788 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.866458 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.867175 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.867727 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.868320 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.869999 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.870008 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.871128 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.872079 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.872669 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.873686 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.875636 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.876548 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.877572 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.878208 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.879208 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.880190 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.880189 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.882223 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.884717 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.884779 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.885481 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.902365 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.902945 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.903644 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.910362 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3694c5b_19cf_464e_90b7_8e719d3a0d11.slice/crio-eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795 WatchSource:0}: Error finding container eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795: Status 404 returned error can't find the container with id eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795 Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.910457 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.911371 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.914682 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.916082 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.916667 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.916994 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.918050 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.918520 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.922895 4727 kubelet_volumes.go:152] 
"Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.923002 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.924637 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.926073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.936424 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.943982 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.958693 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.967729 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.968593 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.969721 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.970386 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 
10:46:14.974536 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.975361 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.976369 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.977037 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.977891 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.978410 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.979254 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.979975 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 
10:46:14.981990 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.982526 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.983892 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.994457 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.995146 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.996113 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.996527 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.996982 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.997832 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.000199 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"db1d5b9079c5ef9d075d8b48f59a077f78b84a728a96d7c81b25ddf23e3d0652"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.001164 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerStarted","Data":"eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006044 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006607 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006745 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006967 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.007821 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.007821 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 09 
10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.014698 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qlpv5" event={"ID":"d335f7f5-7ede-4146-9ecc-f0718b547d43","Type":"ContainerStarted","Data":"95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.014744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qlpv5" event={"ID":"d335f7f5-7ede-4146-9ecc-f0718b547d43","Type":"ContainerStarted","Data":"661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.030791 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.042351 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.051056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.051366 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.053941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"81c1c51202da312ce03669d5c060485af0c383cec8c55724177a3bab0a529fb9"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.055580 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.075158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083609 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083638 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083762 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083818 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083842 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083911 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083955 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.084037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.084062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.084084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 
10:46:15.084106 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.104046 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.145787 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.167735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187070 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187098 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4rgl\" 
(UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187152 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187255 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187259 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187321 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187739 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"ovnkube-node-ngngm\" 
(UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187768 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187966 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc 
kubenswrapper[4727]: I0109 10:46:15.188090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188173 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188361 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188819 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.189404 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.189455 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.191166 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.199635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.209345 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.210925 4727 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.226573 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.249798 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.266537 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.284231 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.299872 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.316856 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.331684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.344772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.344928 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: W0109 10:46:15.356867 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33bb3d7e_6f5b_4a7b_b2c7_b04fb8e20e40.slice/crio-597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc WatchSource:0}: Error finding container 597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc: Status 404 returned error can't find the container with id 597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.382689 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.397574 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.413639 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.431340 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.451615 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.469183 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.482008 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.501790 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.515127 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.525382 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.528035 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.530225 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.543242 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.553408 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.556076 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.569366 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.603771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.614426 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.662983 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.727171 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.747987 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.814623 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.875601 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.921663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.042913 4727 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.065728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.065799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.067648 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" exitCode=0 Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.067747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.067853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.070598 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29"} 
Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.075726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.078476 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1" exitCode=0 Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.079673 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.085946 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.086267 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.094858 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.113861 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.127441 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.144726 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.167591 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.184390 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 10:46:16 crc 
kubenswrapper[4727]: I0109 10:46:16.184480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.198893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.212352 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.226825 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.236345 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238254 
4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238693 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.243091 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.257151 4727 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.257447 4727 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.258519 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259119 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259131 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.278084 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.278968 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282581 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.295803 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.296362 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.304965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305157 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.324934 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.327587 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc 
kubenswrapper[4727]: I0109 10:46:16.329079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329232 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.345590 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.351581 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357926 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.369467 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.374771 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.374894 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377561 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.383864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb
405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.398646 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.411091 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.443104 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483111 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.499432 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.531113 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.549130 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hg5sh"] Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.549679 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.553004 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.553568 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.554073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.554205 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.556307 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.570699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.582415 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc 
kubenswrapper[4727]: I0109 10:46:16.585640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585659 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.597818 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.606316 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.606560 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.60648971 +0000 UTC m=+26.056394491 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.606720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.606880 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.606939 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.606932302 +0000 UTC m=+26.056837083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.612324 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.644896 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.686246 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.687947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.687982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.687995 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.688013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.688024 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707165 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32de8b71-676d-47ed-a5e4-48737247937e-serviceca\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgjj\" (UniqueName: \"kubernetes.io/projected/32de8b71-676d-47ed-a5e4-48737247937e-kube-api-access-4xgjj\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707560 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 
10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707630 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32de8b71-676d-47ed-a5e4-48737247937e-host\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707810 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707835 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707849 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707906 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.707885215 +0000 UTC m=+26.157789996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707938 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707957 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707969 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707979 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:16 crc 
kubenswrapper[4727]: E0109 10:46:16.708013 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.708000839 +0000 UTC m=+26.157905620 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.708166 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.708109702 +0000 UTC m=+26.158014483 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.722189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 
10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.765437 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790123 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.803992 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:
46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808446 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32de8b71-676d-47ed-a5e4-48737247937e-serviceca\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xgjj\" (UniqueName: \"kubernetes.io/projected/32de8b71-676d-47ed-a5e4-48737247937e-kube-api-access-4xgjj\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32de8b71-676d-47ed-a5e4-48737247937e-host\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32de8b71-676d-47ed-a5e4-48737247937e-host\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.809655 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/32de8b71-676d-47ed-a5e4-48737247937e-serviceca\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.853922 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xgjj\" (UniqueName: \"kubernetes.io/projected/32de8b71-676d-47ed-a5e4-48737247937e-kube-api-access-4xgjj\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.859889 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.859974 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.859920 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.860088 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.860122 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.860216 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.860287 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.892910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.892960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.892971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.893012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.893024 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.903438 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c
4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.931047 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: W0109 10:46:16.942963 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32de8b71_676d_47ed_a5e4_48737247937e.slice/crio-8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c WatchSource:0}: Error finding container 8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c: Status 404 returned error can't find the container with id 8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.945416 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.988530 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006951 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.023878 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.062584 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.084039 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hg5sh" event={"ID":"32de8b71-676d-47ed-a5e4-48737247937e","Type":"ContainerStarted","Data":"8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088655 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088682 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" 
event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.092745 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad" exitCode=0 Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.093888 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110484 4727 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110604 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.112381 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.145131 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.183905 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.213977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214082 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.226900 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.264886 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.303025 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc 
kubenswrapper[4727]: I0109 10:46:17.316854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.345607 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.383316 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.418950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.425438 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.463890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"na
me\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.505186 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522492 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.540807 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.584653 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.622014 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624962 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.680305 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.705367 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc 
kubenswrapper[4727]: I0109 10:46:17.729223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729245 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.744447 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 
10:46:17.833622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937588 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041844 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.098362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hg5sh" event={"ID":"32de8b71-676d-47ed-a5e4-48737247937e","Type":"ContainerStarted","Data":"a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.101452 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04" exitCode=0 Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.101533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.116770 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.134668 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149493 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.150998 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.167304 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.182582 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.195419 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.212361 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.229669 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.242749 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251764 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.259151 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c
4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.273783 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.291765 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.303263 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.323650 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.345227 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354974 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.383624 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.422376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458145 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.465423 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.512557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.548883 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561994 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.585347 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.630579 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665208 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665485 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.704300 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.746165 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.768975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769100 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.785962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.824925 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.859601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.859760 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.859649 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:18 crc kubenswrapper[4727]: E0109 10:46:18.859901 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:18 crc kubenswrapper[4727]: E0109 10:46:18.860017 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:18 crc kubenswrapper[4727]: E0109 10:46:18.860130 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.863445 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: 
I0109 10:46:18.871321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871364 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975428 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.079028 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.107748 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.111314 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8" exitCode=0 Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.111366 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.130352 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.152639 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.168265 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.180673 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181567 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.196306 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.207086 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.228044 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.243478 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.258914 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.274542 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284688 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.306170 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.343689 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.386152 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388433 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.428377 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491266 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595153 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.697955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698053 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802963 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.913981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914094 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017499 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.120236 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37" exitCode=0 Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.120302 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.137141 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.157573 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.170373 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.188409 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.204026 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.217876 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221936 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.228470 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.248658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.269703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.286971 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.303675 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.319651 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324220 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.336307 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.351439 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc 
kubenswrapper[4727]: I0109 10:46:20.427381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427395 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529786 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632737 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.653389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.653700 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:28.653661098 +0000 UTC m=+34.103565879 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.653770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.653973 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.654059 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.654040479 +0000 UTC m=+34.103945260 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.754580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.754641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.754686 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754786 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754863 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754895 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754920 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754897 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.754867778 +0000 UTC m=+34.204772749 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755010 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.754981961 +0000 UTC m=+34.204886922 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755158 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755213 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755227 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755310 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.755285391 +0000 UTC m=+34.205190172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847980 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.859375 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.859487 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.859574 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.859627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.859711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.859803 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.951619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.951943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.952071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.952227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.952347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056391 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.132087 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d" exitCode=0 Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.132612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.158370 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164720 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.177612 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.194895 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.216674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.234775 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.250235 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.267939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc 
kubenswrapper[4727]: I0109 10:46:21.268047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268060 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268661 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.287131 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.300687 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.318195 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.331767 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.347450 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.358583 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370934 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.375084 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474071 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474107 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680429 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783169 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885976 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092751 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.140264 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.141956 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.141995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.155256 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerStarted","Data":"8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.160244 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.168533 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.172598 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc 
kubenswrapper[4727]: I0109 10:46:22.173228 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.188631 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195525 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195620 4727 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.205677 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.225079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.240748 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.254271 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.268041 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.285355 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.297707 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298781 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.316462 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.330778 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.344983 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.361095 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.377152 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.392291 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401448 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.405480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.419366 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.434862 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.454391 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.473541 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.484618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.498629 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503813 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.515138 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.530003 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.543046 4727 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.564533 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.581297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709956 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812547 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.859881 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.859977 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:22 crc kubenswrapper[4727]: E0109 10:46:22.860038 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.860146 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:22 crc kubenswrapper[4727]: E0109 10:46:22.860352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:22 crc kubenswrapper[4727]: E0109 10:46:22.860396 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.916986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020996 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124143 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.158196 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329400 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431907 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.534959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638399 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848883 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054318 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157192 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.163645 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/0.log" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.167573 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a" exitCode=1 Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.167630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.168399 4727 scope.go:117] "RemoveContainer" containerID="cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.185222 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.199606 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.214150 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.228533 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.249349 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260931 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.266218 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.282241 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.296415 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.311884 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.323673 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.342424 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.355376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.393173 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service 
openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4d
e4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.411963 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 
10:46:24.467884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467901 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571135 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673427 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775615 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.859567 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.859668 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:24 crc kubenswrapper[4727]: E0109 10:46:24.859777 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.859800 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:24 crc kubenswrapper[4727]: E0109 10:46:24.859884 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:24 crc kubenswrapper[4727]: E0109 10:46:24.860062 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.874609 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877684 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.886437 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.906178 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10
:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service 
openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4d
e4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.920960 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.934954 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.954111 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.968674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980317 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.982190 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443
879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.999207 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45
b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.014176 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.031256 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.051083 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.065795 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.078367 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083417 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.174351 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/0.log" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.178365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.178484 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186293 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.194610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.209617 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.234478 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service 
openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\
\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.251349 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.265666 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.282557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288914 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.298699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.313007 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.330140 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.346007 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.366989 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.382159 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391671 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.397784 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z 
is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.407026 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494584 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596945 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699693 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802216 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.007904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008304 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111831 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.185052 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.185962 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/0.log" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.189474 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" exitCode=1 Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.189537 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.189607 4727 scope.go:117] "RemoveContainer" containerID="cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.190310 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.190494 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.208197 4727 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214847 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.223503 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.228235 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.247977 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.268021 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.283928 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.297249 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.313732 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318122 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.329468 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z 
is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.342905 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.356270 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.372919 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.388837 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.403725 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420480 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.434155 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc 
kubenswrapper[4727]: I0109 10:46:26.450964 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.469572 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.486624 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.503047 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.517853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.522962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523056 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.530463 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.551429 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.565329 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.578994 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.599014 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] 
Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d34
8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.615054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 
10:46:26.626227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626237 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.632782 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.647565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.661864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.719001 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.735775 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740952 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.756622 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.760539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.760753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.760886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.761040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.761174 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.778268 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782888 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.800174 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803987 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.817775 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.817932 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.859587 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.859604 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.859750 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.859924 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.860049 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.860174 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.195797 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.200142 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:27 crc kubenswrapper[4727]: E0109 10:46:27.200337 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.216104 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.228422 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236439 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.261835 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.279851 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.296250 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.315962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.330583 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343101 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343141 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.348112 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443
879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.368858 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45
b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.384269 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.399949 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.415296 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.423057 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg"] Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.423772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.426723 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.426942 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.430417 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.440980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.441050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.441078 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9l5r\" (UniqueName: \"kubernetes.io/projected/50be6d5b-675b-4837-ba20-6d6c75a363d6-kube-api-access-r9l5r\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.441153 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.443867 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.445956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.445997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.446010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.446027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.446039 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.458079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.471771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.486047 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.498133 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.515109 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.532328 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542843 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9l5r\" (UniqueName: \"kubernetes.io/projected/50be6d5b-675b-4837-ba20-6d6c75a363d6-kube-api-access-r9l5r\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.543621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.543671 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548796 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.550395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.551952 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.563117 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9l5r\" (UniqueName: \"kubernetes.io/projected/50be6d5b-675b-4837-ba20-6d6c75a363d6-kube-api-access-r9l5r\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.566458 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.580836 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.595853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.611398 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.626756 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.642280 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651739 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.659272 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.691282 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.741440 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: W0109 10:46:27.757449 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50be6d5b_675b_4837_ba20_6d6c75a363d6.slice/crio-bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1 WatchSource:0}: Error finding container bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1: Status 404 returned error can't find the container with id bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1 Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857964 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960413 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062913 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.153897 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vhsj4"] Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.154377 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.154438 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.167057 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.182149 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.193405 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc 
kubenswrapper[4727]: I0109 10:46:28.203562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" event={"ID":"50be6d5b-675b-4837-ba20-6d6c75a363d6","Type":"ContainerStarted","Data":"28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.203608 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" event={"ID":"50be6d5b-675b-4837-ba20-6d6c75a363d6","Type":"ContainerStarted","Data":"6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.203621 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" event={"ID":"50be6d5b-675b-4837-ba20-6d6c75a363d6","Type":"ContainerStarted","Data":"bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.207279 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.224928 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.239407 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.249750 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.250039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkzz\" (UniqueName: 
\"kubernetes.io/projected/6a29665a-01da-4439-b13d-3950bf573044-kube-api-access-8mkzz\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.254932 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270688 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.272268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.291893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.308291 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.321914 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.343888 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.350775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkzz\" (UniqueName: \"kubernetes.io/projected/6a29665a-01da-4439-b13d-3950bf573044-kube-api-access-8mkzz\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.350842 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.350968 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.351023 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.851004137 +0000 UTC m=+34.300908918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.358238 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkzz\" (UniqueName: \"kubernetes.io/projected/6a29665a-01da-4439-b13d-3950bf573044-kube-api-access-8mkzz\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372863 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.377947 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.391321 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.402597 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.418385 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.430393 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.443161 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.458482 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.471764 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475853 
4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.484772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.497451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.512163 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.526300 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.538703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.549762 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.565844 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579325 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.584209 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.595801 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.606008 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.614727 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc 
kubenswrapper[4727]: I0109 10:46:28.682417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682601 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.753283 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.753390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.753502 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.753563 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.753550793 +0000 UTC m=+50.203455574 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.753607 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.753602484 +0000 UTC m=+50.203507265 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785736 4727 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.853990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.854047 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.854072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.854099 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854208 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854262 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854216 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854326 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.8542933 +0000 UTC m=+50.304198121 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854341 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854371 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854253 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854435 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:29.854415383 +0000 UTC m=+35.304320264 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854302 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854450 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.854443524 +0000 UTC m=+50.304348425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854459 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854575 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:44.854492515 +0000 UTC m=+50.304397346 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.860011 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.860090 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.860007 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.860211 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.860412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.860558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888223 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990791 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093963 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196677 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.298906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.298978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.299000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.299031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.299054 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.401973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402128 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505872 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608856 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815150 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815226 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.859987 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:29 crc kubenswrapper[4727]: E0109 10:46:29.860140 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.865865 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:29 crc kubenswrapper[4727]: E0109 10:46:29.866133 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:29 crc kubenswrapper[4727]: E0109 10:46:29.866261 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:31.866223618 +0000 UTC m=+37.316128579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.918929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.918973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.918983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.919003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.919014 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.021877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022542 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.125530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.125876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.125975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.126078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.126188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.228910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.228985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.228998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.229021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.229036 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332669 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.537997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.538405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.538636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.538909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.539186 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.641930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642088 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745799 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.860291 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.860305 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:30 crc kubenswrapper[4727]: E0109 10:46:30.860388 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.860426 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:30 crc kubenswrapper[4727]: E0109 10:46:30.860637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:30 crc kubenswrapper[4727]: E0109 10:46:30.860718 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054716 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158127 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261803 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468236 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.777941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.777985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.777998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.778015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.778027 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.859294 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:31 crc kubenswrapper[4727]: E0109 10:46:31.859559 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881178 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.885494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:31 crc kubenswrapper[4727]: E0109 10:46:31.885696 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:31 crc kubenswrapper[4727]: E0109 10:46:31.885782 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:35.885759763 +0000 UTC m=+41.335664574 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984392 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.190907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.190965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.190983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.191007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.191026 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397459 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602554 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808277 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.859898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.860062 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.860105 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:32 crc kubenswrapper[4727]: E0109 10:46:32.860246 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:32 crc kubenswrapper[4727]: E0109 10:46:32.860796 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:32 crc kubenswrapper[4727]: E0109 10:46:32.861003 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911150 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911199 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117772 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322841 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530113 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530183 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632843 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.735979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736070 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.839012 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.860251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:33 crc kubenswrapper[4727]: E0109 10:46:33.860403 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045352 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354171 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665685 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.860368 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.860437 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.860636 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:34 crc kubenswrapper[4727]: E0109 10:46:34.860625 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:34 crc kubenswrapper[4727]: E0109 10:46:34.860697 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:34 crc kubenswrapper[4727]: E0109 10:46:34.860880 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870185 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.892857 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.912798 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.925202 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.940899 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.963890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973214 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.977414 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc 
kubenswrapper[4727]: I0109 10:46:34.991472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.008862 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.027702 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.045072 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.063339 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.076599 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.096010 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.100577 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.101500 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.101750 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.111584 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.125701 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.140731 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: 
I0109 10:46:35.178117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178143 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282400 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487833 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591282 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797760 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.859650 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.859795 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901205 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.932380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.932594 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.932689 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:43.932663563 +0000 UTC m=+49.382568384 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004693 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004742 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107243 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210500 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313762 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416618 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520226 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622637 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725679 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829385 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.860273 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.860346 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.860277 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:36 crc kubenswrapper[4727]: E0109 10:46:36.860449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:36 crc kubenswrapper[4727]: E0109 10:46:36.860556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:36 crc kubenswrapper[4727]: E0109 10:46:36.860732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.933242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.933664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.933896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.934130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.934321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974326 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974342 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:36.999888 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:36Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005717 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.020981 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025578 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.044156 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049497 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.067224 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071884 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.088954 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.089185 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091473 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297891 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401345 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504465 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.607898 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711859 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814402 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.859448 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.859782 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919364 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228097 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228189 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330758 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433601 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536982 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.859891 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.860102 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.860326 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:38 crc kubenswrapper[4727]: E0109 10:46:38.860334 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:38 crc kubenswrapper[4727]: E0109 10:46:38.860538 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:38 crc kubenswrapper[4727]: E0109 10:46:38.860669 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954429 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057750 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.263961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264048 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366985 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469980 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572812 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676102 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.778959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779040 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.859931 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:39 crc kubenswrapper[4727]: E0109 10:46:39.860146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983280 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086569 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292753 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396205 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499350 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602785 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705798 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.807945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.807989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.807997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.808012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.808023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.859575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.859677 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:40 crc kubenswrapper[4727]: E0109 10:46:40.859755 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.859780 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:40 crc kubenswrapper[4727]: E0109 10:46:40.859952 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:40 crc kubenswrapper[4727]: E0109 10:46:40.860034 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911820 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018338 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122087 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225246 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328863 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432959 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536651 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536669 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639794 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742836 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.847966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848063 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.860301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:41 crc kubenswrapper[4727]: E0109 10:46:41.860579 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.950912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.950981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.950995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.951011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.951021 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054572 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157368 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260673 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364604 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467677 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570773 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674196 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777449 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.860104 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.860184 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.860258 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:42 crc kubenswrapper[4727]: E0109 10:46:42.860404 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:42 crc kubenswrapper[4727]: E0109 10:46:42.860576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:42 crc kubenswrapper[4727]: E0109 10:46:42.860842 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.982850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.982923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.982990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.983016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.983033 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190771 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294528 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606785 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710702 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.860074 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:43 crc kubenswrapper[4727]: E0109 10:46:43.860312 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.019936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.019987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.019998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.020019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.020031 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.027660 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.027849 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.027954 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:00.027921215 +0000 UTC m=+65.477826196 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123135 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226161 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329952 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.422610 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432761 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.436966 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.437204 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.459882 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.475268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.490230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.507053 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.520326 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc 
kubenswrapper[4727]: I0109 10:46:44.536623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536731 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.537106 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.553007 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.570418 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.588198 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.602066 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.615155 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.631540 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639719 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.643450 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.658045 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648
c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.673904 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.835169 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.835378 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.835348824 +0000 UTC m=+82.285253605 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.835431 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.835632 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.835685 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.835677023 +0000 UTC m=+82.285581804 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845758 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.860128 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.860128 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.860254 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.860337 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.860261 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.860538 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.881393 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.894283 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.915825 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.929758 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.936238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.936299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.936331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936487 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936530 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936544 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936594 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936609 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.936589925 +0000 UTC m=+82.386494696 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936864 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936929 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936958 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936961 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.936932614 +0000 UTC m=+82.386837545 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.937052 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.937031478 +0000 UTC m=+82.386936429 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948146 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.951896 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.970128 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.986096 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc 
kubenswrapper[4727]: I0109 10:46:45.003751 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.019981 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.030250 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc 
kubenswrapper[4727]: I0109 10:46:45.084541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084557 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.087494 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.101995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.115086 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.128679 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.144853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.159526 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.173355 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290755 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602650 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705912 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.859755 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:45 crc kubenswrapper[4727]: E0109 10:46:45.859951 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911589 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014121 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116768 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426186 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529218 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632222 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.733993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734070 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836835 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.859691 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.859755 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:46 crc kubenswrapper[4727]: E0109 10:46:46.859819 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:46 crc kubenswrapper[4727]: E0109 10:46:46.859904 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.859982 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:46 crc kubenswrapper[4727]: E0109 10:46:46.860226 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939300 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041981 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145440 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248990 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272829 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.287654 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294247 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.312250 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.317985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318189 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.339322 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.345002 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.359339 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363991 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.382570 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.382777 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384990 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488698 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694819 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797791 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.859734 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.860215 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004142 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106934 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.209598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210268 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313313 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416874 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416905 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.519945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520071 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.622979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.726494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.726824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.726911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.727014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.727138 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830232 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.859887 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.859921 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:48 crc kubenswrapper[4727]: E0109 10:46:48.860163 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.860570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:48 crc kubenswrapper[4727]: E0109 10:46:48.860949 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:48 crc kubenswrapper[4727]: E0109 10:46:48.861141 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.861703 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932763 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035350 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138349 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.282915 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.285834 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.286362 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.312194 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.332446 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc 
kubenswrapper[4727]: I0109 10:46:49.344077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344179 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.350622 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f
9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.363038 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.378083 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.403609 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.426791 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.446005 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc 
kubenswrapper[4727]: I0109 10:46:49.447326 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447340 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.464851 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.485572 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.500218 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.514904 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.530774 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0e
fc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.546258 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.550948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.550989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.550999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.551017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.551028 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.564890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.583992 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.606285 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] 
Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653735 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.757579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758206 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.859209 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:49 crc kubenswrapper[4727]: E0109 10:46:49.859400 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.860940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.860988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.860999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.861020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.861032 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963753 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066544 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.293045 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.293611 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.296741 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" exitCode=1 Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.296790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.296839 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.298004 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.298296 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.318718 4727 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.335601 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.350877 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc 
kubenswrapper[4727]: I0109 10:46:50.366230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375705 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.381089 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.395216 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.415712 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.433290 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.449835 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.463654 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.478995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479239 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.491885 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.505722 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648
c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.529050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.541765 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.565537 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\
\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 
crc kubenswrapper[4727]: I0109 10:46:50.582179 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582935 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.685944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.685988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.685999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.686017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.686030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788538 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.859359 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.859387 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.859425 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.859607 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.859825 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.860034 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994987 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098536 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201816 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.301462 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303726 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.306086 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:46:51 crc kubenswrapper[4727]: E0109 10:46:51.306247 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.319955 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.333063 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.346658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.359249 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc 
kubenswrapper[4727]: I0109 10:46:51.373657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.386574 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.397846 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc 
kubenswrapper[4727]: I0109 10:46:51.406946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406966 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.412643 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.425703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.439371 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.451995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.465271 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.476846 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.487457 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.499309 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.509982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.511991 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.532018 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.617453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.617855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.617889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.618332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.618356 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721160 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824918 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.860258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:51 crc kubenswrapper[4727]: E0109 10:46:51.860485 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928916 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.136875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.136955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.136990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.137020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.137042 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.342997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343169 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.447007 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549856 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755612 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858801 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.859900 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:52 crc kubenswrapper[4727]: E0109 10:46:52.860094 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.860414 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:52 crc kubenswrapper[4727]: E0109 10:46:52.860562 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.860734 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:52 crc kubenswrapper[4727]: E0109 10:46:52.860862 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961459 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.064010 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166753 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270488 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373857 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581153 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684621 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.787940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.859890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:53 crc kubenswrapper[4727]: E0109 10:46:53.860101 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.993955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994103 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098320 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201741 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.305009 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408638 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511692 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614407 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717870 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.859720 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.859835 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:54 crc kubenswrapper[4727]: E0109 10:46:54.859938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:54 crc kubenswrapper[4727]: E0109 10:46:54.860075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.860218 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:54 crc kubenswrapper[4727]: E0109 10:46:54.860300 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.875378 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.889486 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.906857 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.920744 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc 
kubenswrapper[4727]: I0109 10:46:54.923070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923156 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.938728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.959274 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.974985 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.991504 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.007224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.022347 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025670 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.037345 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.054721 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.067735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.080252 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.093752 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.104738 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.124577 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.137970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138088 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.242185 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345379 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448847 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551891 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654850 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757760 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.859392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:55 crc kubenswrapper[4727]: E0109 10:46:55.859568 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860868 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963968 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066947 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169818 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272176 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374989 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580696 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683371 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786591 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.860057 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.860392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.860289 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:56 crc kubenswrapper[4727]: E0109 10:46:56.860826 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:56 crc kubenswrapper[4727]: E0109 10:46:56.861041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:56 crc kubenswrapper[4727]: E0109 10:46:56.861170 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891552 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995293 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098145 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200679 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.304015 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407082 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510707 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614817 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.671996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.684856 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.689892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.704570 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710487 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.723316 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727937 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.742539 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.746929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.746989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.747007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.747039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.747058 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.761873 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.762017 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.763999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.859632 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.859873 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.866998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970230 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072976 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176162 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278910 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381843 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484401 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587857 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691451 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795112 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.859633 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.859674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.859783 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:58 crc kubenswrapper[4727]: E0109 10:46:58.859998 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:58 crc kubenswrapper[4727]: E0109 10:46:58.860244 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:58 crc kubenswrapper[4727]: E0109 10:46:58.860293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.999904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.999957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.999969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:58.999990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.000004 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.102964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205749 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308826 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411693 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.514489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515043 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618242 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720654 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823781 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.859975 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:59 crc kubenswrapper[4727]: E0109 10:46:59.860187 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926614 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029341 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.108854 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.109085 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.109221 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:32.109191152 +0000 UTC m=+97.559095933 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132385 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235150 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.856980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857102 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.860242 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.860329 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.860371 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.860399 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.860531 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.860647 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.959926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063206 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.165921 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.165985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.165999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.166019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.166033 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.344092 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/0.log" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.344543 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" containerID="a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec" exitCode=1 Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.344652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerDied","Data":"a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.345197 4727 scope.go:117] "RemoveContainer" containerID="a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.362352 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372871 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.378175 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.389497 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.406497 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.422005 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.438687 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.453631 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.468169 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.481684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.493475 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648
c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.505770 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.517559 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.544854 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.559332 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.572251 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc 
kubenswrapper[4727]: I0109 10:47:01.578527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578541 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.586501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 
10:47:01.598866 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc 
kubenswrapper[4727]: I0109 10:47:01.681537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681629 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785420 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.859681 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:01 crc kubenswrapper[4727]: E0109 10:47:01.859890 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888897 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992569 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198275 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300996 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.350743 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/0.log" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.350826 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.368850 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.383050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.398716 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405219 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.412501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.425651 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.440156 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.465368 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.479318 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.489197 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.503192 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508652 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.519469 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.532090 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc 
kubenswrapper[4727]: I0109 10:47:02.549958 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.561941 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.581160 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.598223 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.610380 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611477 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714632 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818109 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818123 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.859831 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.859956 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.859988 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:02 crc kubenswrapper[4727]: E0109 10:47:02.860122 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:02 crc kubenswrapper[4727]: E0109 10:47:02.860263 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:02 crc kubenswrapper[4727]: E0109 10:47:02.860382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920793 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024147 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126906 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229372 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332570 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435346 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538207 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745317 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.847632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.847891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.847972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.848059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.848165 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.860075 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:03 crc kubenswrapper[4727]: E0109 10:47:03.860187 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951166 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054663 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158134 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261674 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466960 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569613 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672365 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775278 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.859990 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.860109 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:04 crc kubenswrapper[4727]: E0109 10:47:04.860164 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.860224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:04 crc kubenswrapper[4727]: E0109 10:47:04.860311 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:04 crc kubenswrapper[4727]: E0109 10:47:04.860534 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878901 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.880461 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.896403 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.908449 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.924112 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.937864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.958043 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.970373 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981895 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.983728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.997657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.013545 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.025619 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.037374 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.056158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.071328 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.082677 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc 
kubenswrapper[4727]: I0109 10:47:05.084168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084257 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.096001 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f
9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.111262 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc 
kubenswrapper[4727]: I0109 10:47:05.187750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187762 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290645 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.496023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598328 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701336 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.859703 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:05 crc kubenswrapper[4727]: E0109 10:47:05.859928 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.906958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907047 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009336 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113319 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215878 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.318974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319074 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.421987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422078 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629113 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629156 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.732956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836787 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.860157 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.860287 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.860362 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.860296 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.860618 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.861084 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.861450 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.861657 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939977 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147453 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249953 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456362 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559052 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559088 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662422 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766214 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.859318 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:07 crc kubenswrapper[4727]: E0109 10:47:07.859593 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045308 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.059555 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065705 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.079826 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085258 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.101069 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.125271 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129833 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.143887 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.144041 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249741 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352921 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558822 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764849 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.859898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.859948 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.859924 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.860106 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.860251 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.860330 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383380 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692967 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796394 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.859904 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:09 crc kubenswrapper[4727]: E0109 10:47:09.860150 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.001981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002081 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104867 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.227943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.227999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.228009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.228030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.228042 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538602 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641589 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745481 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849104 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.859433 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.859501 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.859582 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:10 crc kubenswrapper[4727]: E0109 10:47:10.859627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:10 crc kubenswrapper[4727]: E0109 10:47:10.859711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:10 crc kubenswrapper[4727]: E0109 10:47:10.859793 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952575 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055937 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158835 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261411 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364628 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468949 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571824 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.777991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778074 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.859319 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:11 crc kubenswrapper[4727]: E0109 10:47:11.859592 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880546 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983963 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086619 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188968 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.291953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394446 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601613 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704455 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807914 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.859643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:12 crc kubenswrapper[4727]: E0109 10:47:12.859850 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.859665 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:12 crc kubenswrapper[4727]: E0109 10:47:12.859932 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.859643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:12 crc kubenswrapper[4727]: E0109 10:47:12.859981 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.019972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020079 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330566 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.537016 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640379 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743979 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847636 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.860004 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:13 crc kubenswrapper[4727]: E0109 10:47:13.860298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950539 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.053905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.053986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.054006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.054029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.054041 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157817 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.259753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.363961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466896 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569874 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.672994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673113 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776143 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.859398 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.859499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:14 crc kubenswrapper[4727]: E0109 10:47:14.859600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.859643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:14 crc kubenswrapper[4727]: E0109 10:47:14.859743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:14 crc kubenswrapper[4727]: E0109 10:47:14.859826 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.875757 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878616 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.886893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.909681 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.922000 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.936012 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11
c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.951548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.965214 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981630 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981934 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.996544 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.009610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.024844 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.040032 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.054071 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.067395 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.082604 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.084578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.084725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.084827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: 
I0109 10:47:15.084937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.085009 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.097753 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.110611 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187816 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.289948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290554 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.496953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599818 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702873 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.860192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:15 crc kubenswrapper[4727]: E0109 10:47:15.860383 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909985 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116455 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116469 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323630 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427697 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.633789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634344 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737540 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840968 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.841308 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.860284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.860432 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.860605 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.860479 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.860687 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.860828 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.876975 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.877146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.877230 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.87718594 +0000 UTC m=+146.327090711 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.877309 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.877406 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.877384166 +0000 UTC m=+146.327288947 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944924 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.978855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.978922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.978957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979084 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979145 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979169 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979185 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979221 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.979190934 +0000 UTC m=+146.429095865 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979253 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.979242546 +0000 UTC m=+146.429147327 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979594 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979645 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979659 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979737 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.979716882 +0000 UTC m=+146.429621663 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047714 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150427 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254205 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.356886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.356960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.356982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.357003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.357013 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562479 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664885 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.859936 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:17 crc kubenswrapper[4727]: E0109 10:47:17.860471 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.860851 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972698 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075864 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178906 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217159 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.231904 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236778 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.308124 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.332423 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.332626 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.334991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.412553 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.414791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.415815 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.434224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc 
kubenswrapper[4727]: I0109 10:47:18.438471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438565 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.452065 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.470542 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.481834 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc 
kubenswrapper[4727]: I0109 10:47:18.496976 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.512617 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.526692 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc 
kubenswrapper[4727]: I0109 10:47:18.540876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540890 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.546958 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.563799 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.580750 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.594772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.611139 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.623589 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.637110 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.652120 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.664309 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.685562 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746698 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849294 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849397 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.859803 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.859836 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.859837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.860041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.860517 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.861201 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.873931 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952325 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055367 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158309 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363545 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.420887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.421651 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.424402 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" exitCode=1 Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.424528 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.424624 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.425341 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:19 crc kubenswrapper[4727]: E0109 10:47:19.425572 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.439400 4727 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.450447 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc 
kubenswrapper[4727]: I0109 10:47:19.464192 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.477814 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.492548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.505811 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.516989 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.529739 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.542050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.555050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.566054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568936 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.578220 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.591011 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.600596 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.614084 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.626413 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.635596 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.654699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 
10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"h
ost-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.671972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775667 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.859254 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:19 crc kubenswrapper[4727]: E0109 10:47:19.859408 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878331 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980883 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083730 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185869 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391875 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.430847 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.435275 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.435445 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.454637 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.470409 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.483824 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc 
kubenswrapper[4727]: I0109 10:47:20.494727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494738 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.499943 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.514488 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.528358 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.543254 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.556134 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.570362 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.583230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.595326 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602871 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.615710 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.627344 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.647746 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.660543 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.674183 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.687353 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.700149 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc 
kubenswrapper[4727]: I0109 10:47:20.705292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705407 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808461 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.859898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.860028 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.860108 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.860302 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.860478 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.860660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911728 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117846 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117872 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324224 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634913 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737822 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.860253 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:21 crc kubenswrapper[4727]: E0109 10:47:21.860734 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944608 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047120 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047212 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150679 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253894 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562921 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.769882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770853 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.860131 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.860275 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:22 crc kubenswrapper[4727]: E0109 10:47:22.860330 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.860363 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:22 crc kubenswrapper[4727]: E0109 10:47:22.861411 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:22 crc kubenswrapper[4727]: E0109 10:47:22.861844 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873450 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976953 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183597 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183643 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287560 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390421 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493870 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597343 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699975 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802545 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.859812 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:23 crc kubenswrapper[4727]: E0109 10:47:23.860065 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009736 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113922 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217684 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321419 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838876 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.860270 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.860311 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.860396 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:24 crc kubenswrapper[4727]: E0109 10:47:24.860463 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:24 crc kubenswrapper[4727]: E0109 10:47:24.860670 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:24 crc kubenswrapper[4727]: E0109 10:47:24.860879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.880610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.894065 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.912318 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.925986 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942124 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942118 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.962500 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.976848 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc 
kubenswrapper[4727]: I0109 10:47:24.991177 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.003492 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.016840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.031959 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.044972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045081 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.047540 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c
4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.058056 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.069684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.080973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.093469 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.103663 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.114684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251942 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354229 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561594 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664645 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767541 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.859560 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:25 crc kubenswrapper[4727]: E0109 10:47:25.859916 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870228 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179579 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.282956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283088 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385910 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591219 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.694004 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796915 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796948 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.859571 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.859571 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:26 crc kubenswrapper[4727]: E0109 10:47:26.859745 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:26 crc kubenswrapper[4727]: E0109 10:47:26.859788 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.859584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:26 crc kubenswrapper[4727]: E0109 10:47:26.859866 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900323 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003646 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106702 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.211014 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314222 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416739 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519326 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623250 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.726003 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.828928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.828990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.829006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.829029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.829041 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.859345 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:27 crc kubenswrapper[4727]: E0109 10:47:27.859587 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933438 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036576 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139886 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.449014 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.508023 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513187 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533606 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.548935 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.572549 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.576979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577086 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.591766 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [same status patch as in the previous attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.591886 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594120 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697383 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.859492 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.859537 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.859728 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.859807 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.859919 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.860030 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904422 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006760 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212146 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.315214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.315626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.315840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.316035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.316233 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421430 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523748 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626442 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833163 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.859409 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:29 crc kubenswrapper[4727]: E0109 10:47:29.859600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936331 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039946 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348167 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451374 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554311 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657693 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.859879 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.859892 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:30 crc kubenswrapper[4727]: E0109 10:47:30.860110 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.859914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:30 crc kubenswrapper[4727]: E0109 10:47:30.860354 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:30 crc kubenswrapper[4727]: E0109 10:47:30.860604 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862474 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965974 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068939 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.173000 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.275924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.275984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.275996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.276016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.276030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.585965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586059 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689941 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792900 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.859941 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:31 crc kubenswrapper[4727]: E0109 10:47:31.860170 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896967 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000283 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.148667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.148831 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.148920 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.148893841 +0000 UTC m=+161.598798622 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205800 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308804 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721878 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825386 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.860254 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.860482 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.860550 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.860671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.860857 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.860991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927908 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030838 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.134577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.134869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.134946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.135047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.135130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238463 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341661 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444274 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.546971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547699 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.754814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755527 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.858637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:33 crc kubenswrapper[4727]: E0109 10:47:33.859614 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.860584 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:33 crc kubenswrapper[4727]: E0109 10:47:33.860854 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962294 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273388 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376788 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479344 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.581936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684994 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788340 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.860021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.860197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:34 crc kubenswrapper[4727]: E0109 10:47:34.860381 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.860415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:34 crc kubenswrapper[4727]: E0109 10:47:34.860631 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:34 crc kubenswrapper[4727]: E0109 10:47:34.860769 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.881052 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891811 4727 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.895243 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.912899 4727 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.927102 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.942205 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819ee
db413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.957963 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.971357 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.985022 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994855 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.000196 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.021600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.035802 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.048756 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.064100 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.077158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc 
kubenswrapper[4727]: I0109 10:47:35.097587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.098079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.113600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.128802 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.149015 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200377 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.304002 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.406992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407556 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510649 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716368 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819888 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.860338 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:35 crc kubenswrapper[4727]: E0109 10:47:35.860683 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922828 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026431 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129711 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232798 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335911 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541604 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644451 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.746970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747069 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.859841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.859973 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:36 crc kubenswrapper[4727]: E0109 10:47:36.860040 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.859841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:36 crc kubenswrapper[4727]: E0109 10:47:36.860293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:47:36 crc kubenswrapper[4727]: E0109 10:47:36.860353 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952950 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055921 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162387 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266673 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369771 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473486 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576289 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679774 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783640 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.859794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4"
Jan 09 10:47:37 crc kubenswrapper[4727]: E0109 10:47:37.859985 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095223 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198937 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405970 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509142 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611744 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715474 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.859794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:47:38 crc kubenswrapper[4727]: E0109 10:47:38.859937 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.859816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:38 crc kubenswrapper[4727]: E0109 10:47:38.860009 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.859800 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:47:38 crc kubenswrapper[4727]: E0109 10:47:38.860322 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861239 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.920556 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"]
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.921211 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.923493 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.923578 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.925158 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.925201 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.980200 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qlpv5" podStartSLOduration=85.980175555 podStartE2EDuration="1m25.980175555s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:38.959058435 +0000 UTC m=+104.408963216" watchObservedRunningTime="2026-01-09 10:47:38.980175555 +0000 UTC m=+104.430080336"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.028692 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.028666526 podStartE2EDuration="55.028666526s" podCreationTimestamp="2026-01-09 10:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.012245908 +0000 UTC m=+104.462150709" watchObservedRunningTime="2026-01-09 10:47:39.028666526 +0000 UTC m=+104.478571307"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031470 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031660 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031759 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031940 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.077486 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.077458108 podStartE2EDuration="1m27.077458108s" podCreationTimestamp="2026-01-09 10:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.076953702 +0000 UTC m=+104.526858493" watchObservedRunningTime="2026-01-09 10:47:39.077458108 +0000 UTC m=+104.527362879"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.122568 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podStartSLOduration=86.1225448 podStartE2EDuration="1m26.1225448s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.106143442 +0000 UTC m=+104.556048253" watchObservedRunningTime="2026-01-09 10:47:39.1225448 +0000 UTC m=+104.572449601"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133078 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133166 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133790 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.134572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.138823 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" podStartSLOduration=86.138801734 podStartE2EDuration="1m26.138801734s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.124454462 +0000 UTC m=+104.574359263" watchObservedRunningTime="2026-01-09 10:47:39.138801734 +0000 UTC m=+104.588706515"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.142332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.156714 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.158958 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.158938662 podStartE2EDuration="1m27.158938662s" podCreationTimestamp="2026-01-09 10:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.15887496 +0000 UTC m=+104.608779761" watchObservedRunningTime="2026-01-09 10:47:39.158938662 +0000 UTC m=+104.608843443"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.171730 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.171707133 podStartE2EDuration="21.171707133s" podCreationTimestamp="2026-01-09 10:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.1709787 +0000 UTC m=+104.620883481" watchObservedRunningTime="2026-01-09 10:47:39.171707133 +0000 UTC m=+104.621611904"
Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.211058 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-57zpr" podStartSLOduration=86.21102918 podStartE2EDuration="1m26.21102918s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.210283736 +0000 UTC
m=+104.660188527" watchObservedRunningTime="2026-01-09 10:47:39.21102918 +0000 UTC m=+104.660933961" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.222436 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hg5sh" podStartSLOduration=86.222408557 podStartE2EDuration="1m26.222408557s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.221184557 +0000 UTC m=+104.671089348" watchObservedRunningTime="2026-01-09 10:47:39.222408557 +0000 UTC m=+104.672313338" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.234452 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" podStartSLOduration=85.234425783 podStartE2EDuration="1m25.234425783s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.233737561 +0000 UTC m=+104.683642352" watchObservedRunningTime="2026-01-09 10:47:39.234425783 +0000 UTC m=+104.684330564" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.237973 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.504134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" event={"ID":"8f12ea37-6ec6-49d9-8870-27b7f320fa1a","Type":"ContainerStarted","Data":"b6b3d6c929e00da3855deb688b967f6cf7cf7aa03befb8e5e7f646aea8e801ca"} Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.504203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" event={"ID":"8f12ea37-6ec6-49d9-8870-27b7f320fa1a","Type":"ContainerStarted","Data":"c4ab4ad56d9565ca1854ad66c2c2ff886669688a180d121c145b23dec5e1334a"} Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.859902 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:39 crc kubenswrapper[4727]: E0109 10:47:39.860085 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:40 crc kubenswrapper[4727]: I0109 10:47:40.860115 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:40 crc kubenswrapper[4727]: I0109 10:47:40.860204 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:40 crc kubenswrapper[4727]: E0109 10:47:40.860395 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:40 crc kubenswrapper[4727]: I0109 10:47:40.860428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:40 crc kubenswrapper[4727]: E0109 10:47:40.860613 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:40 crc kubenswrapper[4727]: E0109 10:47:40.860682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:41 crc kubenswrapper[4727]: I0109 10:47:41.859698 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:41 crc kubenswrapper[4727]: E0109 10:47:41.860063 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:41 crc kubenswrapper[4727]: I0109 10:47:41.876491 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" podStartSLOduration=88.876469415 podStartE2EDuration="1m28.876469415s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.518996709 +0000 UTC m=+104.968901490" watchObservedRunningTime="2026-01-09 10:47:41.876469415 +0000 UTC m=+107.326374196" Jan 09 10:47:41 crc kubenswrapper[4727]: I0109 10:47:41.876731 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 09 10:47:42 crc kubenswrapper[4727]: I0109 10:47:42.859654 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:42 crc kubenswrapper[4727]: I0109 10:47:42.859834 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:42 crc kubenswrapper[4727]: E0109 10:47:42.859856 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:42 crc kubenswrapper[4727]: I0109 10:47:42.859607 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:42 crc kubenswrapper[4727]: E0109 10:47:42.860976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:42 crc kubenswrapper[4727]: E0109 10:47:42.860992 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:43 crc kubenswrapper[4727]: I0109 10:47:43.859559 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:43 crc kubenswrapper[4727]: E0109 10:47:43.859711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.860207 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.860242 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:44 crc kubenswrapper[4727]: E0109 10:47:44.862167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.862188 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:44 crc kubenswrapper[4727]: E0109 10:47:44.862237 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:44 crc kubenswrapper[4727]: E0109 10:47:44.862307 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.889736 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.889710192 podStartE2EDuration="3.889710192s" podCreationTimestamp="2026-01-09 10:47:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:44.887979987 +0000 UTC m=+110.337884758" watchObservedRunningTime="2026-01-09 10:47:44.889710192 +0000 UTC m=+110.339614973" Jan 09 10:47:45 crc kubenswrapper[4727]: I0109 10:47:45.860097 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:45 crc kubenswrapper[4727]: E0109 10:47:45.861273 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:45 crc kubenswrapper[4727]: I0109 10:47:45.861144 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:45 crc kubenswrapper[4727]: E0109 10:47:45.861671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:46 crc kubenswrapper[4727]: I0109 10:47:46.859416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:46 crc kubenswrapper[4727]: I0109 10:47:46.859529 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:46 crc kubenswrapper[4727]: I0109 10:47:46.859575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:46 crc kubenswrapper[4727]: E0109 10:47:46.859627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:46 crc kubenswrapper[4727]: E0109 10:47:46.859737 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:46 crc kubenswrapper[4727]: E0109 10:47:46.859967 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532011 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532770 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/0.log" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532834 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" exitCode=1 Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532871 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" 
event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerDied","Data":"82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd"} Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532908 4727 scope.go:117] "RemoveContainer" containerID="a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.533624 4727 scope.go:117] "RemoveContainer" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" Jan 09 10:47:47 crc kubenswrapper[4727]: E0109 10:47:47.533892 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-57zpr_openshift-multus(f0230d78-c2b3-4a02-8243-6b39e8eecb90)\"" pod="openshift-multus/multus-57zpr" podUID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.860022 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:47 crc kubenswrapper[4727]: E0109 10:47:47.860639 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.537348 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.859490 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:48 crc kubenswrapper[4727]: E0109 10:47:48.859650 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.859712 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.859754 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:48 crc kubenswrapper[4727]: E0109 10:47:48.860043 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:48 crc kubenswrapper[4727]: E0109 10:47:48.860158 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:49 crc kubenswrapper[4727]: I0109 10:47:49.859345 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:49 crc kubenswrapper[4727]: E0109 10:47:49.859556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:50 crc kubenswrapper[4727]: I0109 10:47:50.860193 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:50 crc kubenswrapper[4727]: I0109 10:47:50.860284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:50 crc kubenswrapper[4727]: E0109 10:47:50.861372 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:50 crc kubenswrapper[4727]: I0109 10:47:50.860355 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:50 crc kubenswrapper[4727]: E0109 10:47:50.861732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:50 crc kubenswrapper[4727]: E0109 10:47:50.861893 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:51 crc kubenswrapper[4727]: I0109 10:47:51.859542 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:51 crc kubenswrapper[4727]: E0109 10:47:51.859752 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:52 crc kubenswrapper[4727]: I0109 10:47:52.859664 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:52 crc kubenswrapper[4727]: I0109 10:47:52.859783 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:52 crc kubenswrapper[4727]: I0109 10:47:52.859808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:52 crc kubenswrapper[4727]: E0109 10:47:52.859963 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:52 crc kubenswrapper[4727]: E0109 10:47:52.859801 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:52 crc kubenswrapper[4727]: E0109 10:47:52.860132 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:53 crc kubenswrapper[4727]: I0109 10:47:53.860252 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:53 crc kubenswrapper[4727]: E0109 10:47:53.860464 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.807959 4727 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 09 10:47:54 crc kubenswrapper[4727]: I0109 10:47:54.859845 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:54 crc kubenswrapper[4727]: I0109 10:47:54.859944 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:54 crc kubenswrapper[4727]: I0109 10:47:54.860094 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.861005 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.861222 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.861336 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.949922 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:47:55 crc kubenswrapper[4727]: I0109 10:47:55.859594 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:55 crc kubenswrapper[4727]: E0109 10:47:55.860481 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.860410 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.860558 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.860657 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.860695 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.860837 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.861031 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.862105 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.862290 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:57 crc kubenswrapper[4727]: I0109 10:47:57.859468 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:57 crc kubenswrapper[4727]: E0109 10:47:57.859702 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:58 crc kubenswrapper[4727]: I0109 10:47:58.860132 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:58 crc kubenswrapper[4727]: I0109 10:47:58.860214 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:58 crc kubenswrapper[4727]: E0109 10:47:58.860289 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:58 crc kubenswrapper[4727]: I0109 10:47:58.860394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:58 crc kubenswrapper[4727]: E0109 10:47:58.860446 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:58 crc kubenswrapper[4727]: E0109 10:47:58.860646 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:59 crc kubenswrapper[4727]: I0109 10:47:59.859918 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:59 crc kubenswrapper[4727]: E0109 10:47:59.860272 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:59 crc kubenswrapper[4727]: I0109 10:47:59.860391 4727 scope.go:117] "RemoveContainer" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" Jan 09 10:47:59 crc kubenswrapper[4727]: E0109 10:47:59.950908 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.582793 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.582882 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08"} Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.859826 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.859975 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.860072 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:00 crc kubenswrapper[4727]: E0109 10:48:00.860157 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:00 crc kubenswrapper[4727]: E0109 10:48:00.860084 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:00 crc kubenswrapper[4727]: E0109 10:48:00.860271 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:01 crc kubenswrapper[4727]: I0109 10:48:01.859955 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:01 crc kubenswrapper[4727]: E0109 10:48:01.860134 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:02 crc kubenswrapper[4727]: I0109 10:48:02.859428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:02 crc kubenswrapper[4727]: I0109 10:48:02.859498 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:02 crc kubenswrapper[4727]: I0109 10:48:02.859706 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:02 crc kubenswrapper[4727]: E0109 10:48:02.859687 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:02 crc kubenswrapper[4727]: E0109 10:48:02.859838 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:02 crc kubenswrapper[4727]: E0109 10:48:02.859938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:03 crc kubenswrapper[4727]: I0109 10:48:03.859296 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:03 crc kubenswrapper[4727]: E0109 10:48:03.859802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:04 crc kubenswrapper[4727]: I0109 10:48:04.859853 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:04 crc kubenswrapper[4727]: I0109 10:48:04.861365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:04 crc kubenswrapper[4727]: I0109 10:48:04.861392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.861435 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.861526 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.861751 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.951764 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:05 crc kubenswrapper[4727]: I0109 10:48:05.859813 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:05 crc kubenswrapper[4727]: E0109 10:48:05.859982 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:06 crc kubenswrapper[4727]: I0109 10:48:06.859897 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:06 crc kubenswrapper[4727]: I0109 10:48:06.859968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:06 crc kubenswrapper[4727]: E0109 10:48:06.860136 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:06 crc kubenswrapper[4727]: I0109 10:48:06.860230 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:06 crc kubenswrapper[4727]: E0109 10:48:06.860258 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:06 crc kubenswrapper[4727]: E0109 10:48:06.860438 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:07 crc kubenswrapper[4727]: I0109 10:48:07.860074 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:07 crc kubenswrapper[4727]: E0109 10:48:07.860249 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:08 crc kubenswrapper[4727]: I0109 10:48:08.859776 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:08 crc kubenswrapper[4727]: I0109 10:48:08.859771 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:08 crc kubenswrapper[4727]: E0109 10:48:08.859968 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:08 crc kubenswrapper[4727]: I0109 10:48:08.859799 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:08 crc kubenswrapper[4727]: E0109 10:48:08.860072 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:08 crc kubenswrapper[4727]: E0109 10:48:08.860122 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:09 crc kubenswrapper[4727]: I0109 10:48:09.860141 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:09 crc kubenswrapper[4727]: E0109 10:48:09.860359 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:09 crc kubenswrapper[4727]: E0109 10:48:09.953749 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:10 crc kubenswrapper[4727]: I0109 10:48:10.860333 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:10 crc kubenswrapper[4727]: I0109 10:48:10.860357 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:10 crc kubenswrapper[4727]: E0109 10:48:10.860576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:10 crc kubenswrapper[4727]: I0109 10:48:10.860381 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:10 crc kubenswrapper[4727]: E0109 10:48:10.860711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:10 crc kubenswrapper[4727]: E0109 10:48:10.860827 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:11 crc kubenswrapper[4727]: I0109 10:48:11.859934 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:11 crc kubenswrapper[4727]: E0109 10:48:11.860771 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:11 crc kubenswrapper[4727]: I0109 10:48:11.861450 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.627043 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.630861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.631430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.662840 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podStartSLOduration=118.662811521 podStartE2EDuration="1m58.662811521s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:12.662372168 +0000 UTC m=+138.112276969" watchObservedRunningTime="2026-01-09 
10:48:12.662811521 +0000 UTC m=+138.112716322" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.860032 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.860079 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.860223 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.860258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.860415 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.860500 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.945119 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vhsj4"] Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.945258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.945372 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860298 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860298 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860387 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860483 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.862064 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.862590 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.862809 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.863202 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.954683 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860279 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861280 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860353 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860328 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861685 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861492 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860453 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860484 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.861167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860539 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860586 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.861895 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.862028 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.861706 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860016 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860079 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860145 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860229 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862375 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862432 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862432 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862877 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862887 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.863238 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.937403 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:20 crc kubenswrapper[4727]: E0109 10:48:20.937595 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:50:22.93756981 +0000 UTC m=+268.387474591 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.937655 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.944466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.038825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.038895 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.038921 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.040031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.042099 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.042143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:21 crc kubenswrapper[4727]: 
I0109 10:48:21.176486 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.190362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.196003 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:21 crc kubenswrapper[4727]: W0109 10:48:21.453613 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173 WatchSource:0}: Error finding container 72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173: Status 404 returned error can't find the container with id 72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173 Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.661729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9038215f2a461aa23c14abe79af74b1e1ca6367c7d0bf500f4a12fff4b350c2d"} Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.661800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173"} Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.662083 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 
10:48:21 crc kubenswrapper[4727]: W0109 10:48:21.689001 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34 WatchSource:0}: Error finding container 3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34: Status 404 returned error can't find the container with id 3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34 Jan 09 10:48:21 crc kubenswrapper[4727]: W0109 10:48:21.689350 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4 WatchSource:0}: Error finding container 57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4: Status 404 returned error can't find the container with id 57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4 Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.666999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a4850ec6daf1ded4182b2f9b0755746e960aff24eb8c1697b770c06b36c95b3a"} Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.667427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34"} Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.668129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02f65a319a9db16c5d996291602bfa37d8e2eae9f31f0f651142ed45782e92df"} Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.668178 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4"} Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.927992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.970666 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.971267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972130 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkdts"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972672 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972859 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972921 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.973406 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5d9bz"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.974216 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.974258 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.974701 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.975181 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.975575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.976195 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.976495 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977088 4727 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977120 4727 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977142 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977157 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977198 4727 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": 
failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977209 4727 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977228 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977210 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977242 4727 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: secrets "default-dockercfg-chnjx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found 
between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977252 4727 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977264 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-chnjx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977285 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977296 4727 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977306 4727 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977321 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977333 4727 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977346 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977358 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and 
this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978444 4727 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978475 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978540 4727 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978562 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978631 4727 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps 
"service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978655 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978780 4727 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978806 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979241 4727 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group 
"" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979272 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979359 4727 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979382 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979438 4727 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no 
relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979454 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979564 4727 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979591 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979661 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: 
E0109 10:48:29.979679 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979746 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979771 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979816 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979835 4727 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979895 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979919 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979977 4727 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979994 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to 
list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980041 4727 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980057 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980101 4727 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980119 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980167 4727 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980189 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980235 4727 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980252 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980296 4727 reflector.go:561] 
object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980312 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980358 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980374 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980593 4727 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: secrets "cluster-image-registry-operator-dockercfg-m4qtx" is forbidden: 
User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980619 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-image-registry-operator-dockercfg-m4qtx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.980796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.981402 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.981736 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.981939 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.982362 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.982555 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.983175 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 
09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.986024 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.986698 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.987954 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.989614 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.992103 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.993213 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.996964 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:29.997332 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:29.997772 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:29.998031 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.019477 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.019797 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.019984 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020100 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020288 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020415 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020549 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020671 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020993 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021154 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021388 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021495 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021633 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.023775 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s9tfg"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.024326 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwvhd"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.025293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.025791 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.027780 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.028282 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-config\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrvk\" (UniqueName: \"kubernetes.io/projected/fab289a6-8124-413b-88f7-0ef3e4523b94-kube-api-access-nhrvk\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034619 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3f932b-fb41-4a2b-967b-a15de9606cbd-serving-cert\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034703 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034718 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 
10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034732 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab289a6-8124-413b-88f7-0ef3e4523b94-serving-cert\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034748 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fab289a6-8124-413b-88f7-0ef3e4523b94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.038543 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.038759 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041064 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041277 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041428 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041548 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041682 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041967 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.042660 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.045605 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.045942 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046887 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046972 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.047120 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.047235 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046891 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.047365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.049927 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.055723 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.056410 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.064880 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065157 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065521 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065822 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065858 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx72n"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.066002 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.056972 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.056716 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.075942 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.076673 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zcx2c"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.077302 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.077539 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.079948 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8lqcl"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.087281 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.090217 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.095772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.102952 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.128951 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129102 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129330 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.128949 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129717 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129754 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.142498 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.142947 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150626 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150655 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ppp\" 
(UniqueName: \"kubernetes.io/projected/7e76cc6a-976f-4e61-8829-bbf3c4313293-kube-api-access-w2ppp\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150711 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150727 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150744 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-client\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 
09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab289a6-8124-413b-88f7-0ef3e4523b94-serving-cert\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150836 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150855 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150904 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150946 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-dir\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 
10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzwq\" (UniqueName: \"kubernetes.io/projected/7604b799-797e-4127-84cf-3f7e1c17dc87-kube-api-access-pqzwq\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151001 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3f932b-fb41-4a2b-967b-a15de9606cbd-serving-cert\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8459883-ed7a-4108-8198-ee2fbd63e891-metrics-tls\") pod 
\"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-policies\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnxm\" (UniqueName: \"kubernetes.io/projected/1d3f932b-fb41-4a2b-967b-a15de9606cbd-kube-api-access-8rnxm\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151100 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151133 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-swb26\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-kube-api-access-swb26\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151154 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bgm\" (UniqueName: \"kubernetes.io/projected/423f9db2-b3a1-406d-b906-bc4ba37fdb55-kube-api-access-f4bgm\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" 
(UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151226 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/33b90f5a-a103-48d8-9eb1-fd7a153250ac-kube-api-access-9qcj6\") pod \"downloads-7954f5f757-5d9bz\" (UID: \"33b90f5a-a103-48d8-9eb1-fd7a153250ac\") " pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fab289a6-8124-413b-88f7-0ef3e4523b94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151340 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllpk\" (UniqueName: \"kubernetes.io/projected/c999b3d9-4231-4163-821a-b759599c6510-kube-api-access-hllpk\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151356 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"console-f9d7485db-pjc7c\" (UID: 
\"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-config\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151413 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8l4f\" (UniqueName: \"kubernetes.io/projected/e8459883-ed7a-4108-8198-ee2fbd63e891-kube-api-access-z8l4f\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151433 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrvk\" (UniqueName: \"kubernetes.io/projected/fab289a6-8124-413b-88f7-0ef3e4523b94-kube-api-access-nhrvk\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztxj\" (UniqueName: \"kubernetes.io/projected/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-kube-api-access-mztxj\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-trusted-ca\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151489 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-encryption-config\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151545 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151564 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc 
kubenswrapper[4727]: I0109 10:48:30.151605 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.154708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-serving-cert\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.155425 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.155878 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156027 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156199 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156351 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156487 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156795 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157037 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157177 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fab289a6-8124-413b-88f7-0ef3e4523b94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 
10:48:30.157362 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.158396 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.159239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-config\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162189 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab289a6-8124-413b-88f7-0ef3e4523b94-serving-cert\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162747 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162951 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 09 10:48:30 
crc kubenswrapper[4727]: I0109 10:48:30.163033 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163073 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163381 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163724 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163823 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163951 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163985 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164070 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164075 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164133 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164263 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164399 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164753 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164846 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.165051 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 
10:48:30.165082 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.165587 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nz6pf"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.165927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.166560 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.166695 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.166807 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.168093 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25xhd"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.168891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3f932b-fb41-4a2b-967b-a15de9606cbd-serving-cert\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.168970 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169004 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169421 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169559 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169568 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.170383 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.170441 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.170387 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.171097 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.171398 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.172999 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.177644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.179923 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"controller-manager-879f6c89f-75slj\" (UID: 
\"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.181029 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183021 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183306 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183346 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5d9bz"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183371 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s9tfg"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183395 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183408 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183419 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkdts"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183429 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ppcsh"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183646 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.184570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.201496 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.204259 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.204268 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.207981 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.212360 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.217629 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.220034 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"] 
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.222574 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwvhd"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.224832 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx72n"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.228692 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.230489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.231646 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.241050 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.244608 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.245884 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.247564 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.248958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 
10:48:30.250556 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8lqcl"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.251143 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.252306 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.252914 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.253366 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.254552 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllpk\" (UniqueName: \"kubernetes.io/projected/c999b3d9-4231-4163-821a-b759599c6510-kube-api-access-hllpk\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255785 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255764 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tvd7t"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8l4f\" (UniqueName: \"kubernetes.io/projected/e8459883-ed7a-4108-8198-ee2fbd63e891-kube-api-access-z8l4f\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256111 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztxj\" (UniqueName: \"kubernetes.io/projected/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-kube-api-access-mztxj\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256150 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccf4\" (UniqueName: \"kubernetes.io/projected/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-kube-api-access-jccf4\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-trusted-ca\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-encryption-config\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-stats-auth\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256404 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-metrics-certs\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256435 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256532 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-serving-cert\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256564 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256597 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256683 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ppp\" (UniqueName: \"kubernetes.io/projected/7e76cc6a-976f-4e61-8829-bbf3c4313293-kube-api-access-w2ppp\") pod 
\"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256750 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-client\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dq98\" (UniqueName: \"kubernetes.io/projected/5789711a-8f11-41c1-ac8d-eb5e60d147a1-kube-api-access-9dq98\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256841 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256857 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256876 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-dir\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzwq\" (UniqueName: \"kubernetes.io/projected/7604b799-797e-4127-84cf-3f7e1c17dc87-kube-api-access-pqzwq\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256985 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8459883-ed7a-4108-8198-ee2fbd63e891-metrics-tls\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-policies\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-proxy-tls\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnxm\" (UniqueName: 
\"kubernetes.io/projected/1d3f932b-fb41-4a2b-967b-a15de9606cbd-kube-api-access-8rnxm\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257138 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swb26\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-kube-api-access-swb26\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257157 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257174 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bgm\" (UniqueName: \"kubernetes.io/projected/423f9db2-b3a1-406d-b906-bc4ba37fdb55-kube-api-access-f4bgm\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257212 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5789711a-8f11-41c1-ac8d-eb5e60d147a1-service-ca-bundle\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/33b90f5a-a103-48d8-9eb1-fd7a153250ac-kube-api-access-9qcj6\") pod \"downloads-7954f5f757-5d9bz\" (UID: \"33b90f5a-a103-48d8-9eb1-fd7a153250ac\") " pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: 
\"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-default-certificate\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.258035 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-trusted-ca\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.258240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.258583 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ppcsh"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.259317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.259978 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260105 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260137 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: 
\"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260369 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260606 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261075 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-policies\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261599 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"console-f9d7485db-pjc7c\" (UID: 
\"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261680 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-dir\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261930 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-serving-cert\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.262049 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.262205 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.263003 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8459883-ed7a-4108-8198-ee2fbd63e891-metrics-tls\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.263385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.263575 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-encryption-config\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.264050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.264368 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-client\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.264936 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nz6pf"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.266389 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.267857 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.269316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.270834 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tvd7t"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.272698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25xhd"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.273718 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xqcqv"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.273747 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.275304 4727 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-99dfz"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.275537 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.276649 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.276663 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xqcqv"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.293832 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.315528 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.333946 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-stats-auth\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-metrics-certs\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " 
pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359134 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359210 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dq98\" (UniqueName: \"kubernetes.io/projected/5789711a-8f11-41c1-ac8d-eb5e60d147a1-kube-api-access-9dq98\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359344 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-proxy-tls\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5789711a-8f11-41c1-ac8d-eb5e60d147a1-service-ca-bundle\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-default-certificate\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccf4\" (UniqueName: \"kubernetes.io/projected/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-kube-api-access-jccf4\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.361608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.362237 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.363803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-proxy-tls\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.373343 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.384076 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-default-certificate\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.394717 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.401056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5789711a-8f11-41c1-ac8d-eb5e60d147a1-service-ca-bundle\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.414756 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.422853 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-metrics-certs\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.433803 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.454139 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.491641 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nhrvk\" (UniqueName: \"kubernetes.io/projected/fab289a6-8124-413b-88f7-0ef3e4523b94-kube-api-access-nhrvk\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.493963 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.513238 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.533163 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.554219 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.562346 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-stats-auth\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.574026 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.614957 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.634014 4727 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.654438 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.663492 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.673335 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.694312 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.714249 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.734449 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.753795 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.774723 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.794299 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.814552 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 09 10:48:30 crc kubenswrapper[4727]: 
I0109 10:48:30.833607 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.853977 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.873429 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.883876 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"] Jan 09 10:48:30 crc kubenswrapper[4727]: W0109 10:48:30.892269 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab289a6_8124_413b_88f7_0ef3e4523b94.slice/crio-9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279 WatchSource:0}: Error finding container 9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279: Status 404 returned error can't find the container with id 9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279 Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.894101 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.919771 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.933240 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.953924 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.973775 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.994770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.013791 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.033421 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.053693 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.073930 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.094224 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.113662 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.134160 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.153525 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156044 4727 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156120 4727 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156142 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.656119359 +0000 UTC m=+157.106024140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156212 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.656187051 +0000 UTC m=+157.106091832 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.172413 4727 request.go:700] Waited for 1.005317273s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.173835 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.194318 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.213806 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.233315 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.252460 4727 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.252503 4727 projected.go:194] Error preparing data for projected volume kube-api-access-vpmsk for pod openshift-controller-manager/controller-manager-879f6c89f-75slj: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.252585 4727 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.752562215 +0000 UTC m=+157.202466996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vpmsk" (UniqueName: "kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.253647 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259275 4727 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259314 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759304691 +0000 UTC m=+157.209209472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259277 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259445 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759437465 +0000 UTC m=+157.209342246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259610 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259710 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759700172 +0000 UTC m=+157.209604953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259742 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259887 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759878937 +0000 UTC m=+157.209783718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260381 4727 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260421 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert podName:423f9db2-b3a1-406d-b906-bc4ba37fdb55 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760411873 +0000 UTC m=+157.210316654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-rbqsq" (UID: "423f9db2-b3a1-406d-b906-bc4ba37fdb55") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260433 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260439 4727 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260465 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760455254 +0000 UTC m=+157.210360035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260479 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760470635 +0000 UTC m=+157.210375416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260481 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260501 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760495006 +0000 UTC m=+157.210399787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260862 4727 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260986 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config podName:423f9db2-b3a1-406d-b906-bc4ba37fdb55 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760974589 +0000 UTC m=+157.210879360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config") pod "openshift-apiserver-operator-796bbdcf4f-rbqsq" (UID: "423f9db2-b3a1-406d-b906-bc4ba37fdb55") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.261023 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.261146 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.761136564 +0000 UTC m=+157.211041345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262665 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262698 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.762690539 +0000 UTC m=+157.212595320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262779 4727 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262985 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.762951767 +0000 UTC m=+157.212856548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.280079 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.293755 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.313045 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.333060 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.353482 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.374399 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.394632 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.414276 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.435367 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.454397 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.475134 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.494296 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.514135 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.534498 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.554185 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.574447 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.597324 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.614877 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.634469 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.653988 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.674877 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.677814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.678123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.694311 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.701225 4727 generic.go:334] "Generic (PLEG): container finished" podID="fab289a6-8124-413b-88f7-0ef3e4523b94" containerID="54c508419749042efdc1048c1d26247b608cb174bd851eb22ff5ca550efb8308" exitCode=0 Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.701351 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" event={"ID":"fab289a6-8124-413b-88f7-0ef3e4523b94","Type":"ContainerDied","Data":"54c508419749042efdc1048c1d26247b608cb174bd851eb22ff5ca550efb8308"} Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.701575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" event={"ID":"fab289a6-8124-413b-88f7-0ef3e4523b94","Type":"ContainerStarted","Data":"9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279"} Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.713433 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.734160 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.759977 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 09 
10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.773740 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779213 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779631 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779691 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779941 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.794219 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 
10:48:31.814102 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.833766 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.854461 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.874679 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.893073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.971358 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8l4f\" (UniqueName: \"kubernetes.io/projected/e8459883-ed7a-4108-8198-ee2fbd63e891-kube-api-access-z8l4f\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.014364 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.034429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.053451 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.060862 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.073795 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.149725 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnxm\" (UniqueName: \"kubernetes.io/projected/1d3f932b-fb41-4a2b-967b-a15de9606cbd-kube-api-access-8rnxm\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.168312 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzwq\" (UniqueName: \"kubernetes.io/projected/7604b799-797e-4127-84cf-3f7e1c17dc87-kube-api-access-pqzwq\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.191841 4727 request.go:700] Waited for 1.931446613s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.210239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.228907 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-swb26\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-kube-api-access-swb26\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.248963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.254303 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.274576 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.294196 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.313360 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwvhd"] Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.314175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.333424 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.345001 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.353911 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.353954 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.389913 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.401118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dq98\" (UniqueName: \"kubernetes.io/projected/5789711a-8f11-41c1-ac8d-eb5e60d147a1-kube-api-access-9dq98\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.413187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccf4\" (UniqueName: \"kubernetes.io/projected/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-kube-api-access-jccf4\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.433936 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.454092 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.477497 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.481121 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.487997 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e375e91d-f60e-4b86-87ee-a043c2b81128-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492285 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.493827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496081 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-image-import-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-serving-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit-dir\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0621386-4e3b-422a-93db-adcd616daa7a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc 
kubenswrapper[4727]: I0109 10:48:32.497047 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-node-pullsecrets\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-encryption-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57n4w\" (UniqueName: \"kubernetes.io/projected/096c2622-3648-4579-8139-9d3a8d4a9006-kube-api-access-57n4w\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"image-registry-697d97f7c8-wfhcs\" 
(UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/096c2622-3648-4579-8139-9d3a8d4a9006-proxy-tls\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc 
kubenswrapper[4727]: I0109 10:48:32.497439 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h56\" (UniqueName: \"kubernetes.io/projected/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-kube-api-access-46h56\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497466 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497529 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dmmg\" (UniqueName: \"kubernetes.io/projected/15a46c73-a8f2-427f-a701-01ccad52c6a1-kube-api-access-6dmmg\") pod \"migrator-59844c95c7-wxzs5\" (UID: \"15a46c73-a8f2-427f-a701-01ccad52c6a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4c8l\" (UniqueName: 
\"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498426 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: 
I0109 10:48:32.498827 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-client\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.499008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.499327 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.499372 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvzp\" (UniqueName: \"kubernetes.io/projected/198987e6-b5aa-4331-9e5e-4a51a02ab712-kube-api-access-rbvzp\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl857\" (UniqueName: \"kubernetes.io/projected/e375e91d-f60e-4b86-87ee-a043c2b81128-kube-api-access-wl857\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509400 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.510923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.511197 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.011168245 +0000 UTC m=+158.461073016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.511579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-serving-cert\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512118 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh4rg\" (UniqueName: \"kubernetes.io/projected/e0621386-4e3b-422a-93db-adcd616daa7a-kube-api-access-gh4rg\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e375e91d-f60e-4b86-87ee-a043c2b81128-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512323 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-images\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.513067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.515765 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.548713 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.553078 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.554308 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.574655 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.594311 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.610931 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.614309 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.614329 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.614597 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.114545563 +0000 UTC m=+158.564450354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.614751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztxj\" (UniqueName: \"kubernetes.io/projected/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-kube-api-access-mztxj\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615314 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-plugins-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615448 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46h56\" (UniqueName: \"kubernetes.io/projected/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-kube-api-access-46h56\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615810 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.616161 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.616307 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6dmmg\" (UniqueName: \"kubernetes.io/projected/15a46c73-a8f2-427f-a701-01ccad52c6a1-kube-api-access-6dmmg\") pod \"migrator-59844c95c7-wxzs5\" (UID: \"15a46c73-a8f2-427f-a701-01ccad52c6a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.616812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617347 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lqmc\" (UniqueName: \"kubernetes.io/projected/aa62f546-f6a1-46e8-9023-482a9e2e04b6-kube-api-access-8lqmc\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617437 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6l2v\" (UniqueName: \"kubernetes.io/projected/ea45a4de-3e71-4605-b02d-258b9dbb544c-kube-api-access-d6l2v\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617479 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-node-bootstrap-token\") pod 
\"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617609 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-config\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617715 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-key\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617733 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-srv-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617790 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617838 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-srv-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617874 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lmh\" (UniqueName: \"kubernetes.io/projected/27d5037e-e25b-4865-a1fe-7d165be1bf23-kube-api-access-p4lmh\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617940 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617961 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2640d0ff-e8c2-4795-bf96-9b862e10de22-config\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.618017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvzp\" (UniqueName: \"kubernetes.io/projected/198987e6-b5aa-4331-9e5e-4a51a02ab712-kube-api-access-rbvzp\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.618047 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzd77\" (UniqueName: \"kubernetes.io/projected/879d1222-addb-406a-b8fd-3ce4068c1d08-kube-api-access-fzd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.618640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 
10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619129 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbc98\" (UniqueName: \"kubernetes.io/projected/402cb251-6fda-417f-a9bf-30b59833a3cd-kube-api-access-rbc98\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619877 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-certs\") pod \"machine-config-server-99dfz\" (UID: 
\"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619903 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-cabundle\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-serving-cert\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619964 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27d5037e-e25b-4865-a1fe-7d165be1bf23-config-volume\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619997 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh4rg\" (UniqueName: \"kubernetes.io/projected/e0621386-4e3b-422a-93db-adcd616daa7a-kube-api-access-gh4rg\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620068 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-images\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620094 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa62f546-f6a1-46e8-9023-482a9e2e04b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-registration-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620161 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27d5037e-e25b-4865-a1fe-7d165be1bf23-metrics-tls\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgq6\" (UniqueName: \"kubernetes.io/projected/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-kube-api-access-8xgq6\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-client\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620265 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0621386-4e3b-422a-93db-adcd616daa7a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620315 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh47w\" (UniqueName: \"kubernetes.io/projected/50dba57c-02ba-4204-a8d0-6f95ffed6db7-kube-api-access-sh47w\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620333 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-webhook-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" 
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-mountpoint-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57n4w\" (UniqueName: \"kubernetes.io/projected/096c2622-3648-4579-8139-9d3a8d4a9006-kube-api-access-57n4w\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620395 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/096c2622-3648-4579-8139-9d3a8d4a9006-proxy-tls\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-csi-data-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: 
\"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hw9\" (UniqueName: \"kubernetes.io/projected/2640d0ff-e8c2-4795-bf96-9b862e10de22-kube-api-access-k9hw9\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620498 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620552 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/879d1222-addb-406a-b8fd-3ce4068c1d08-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621282 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621379 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4c8l\" (UniqueName: 
\"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621543 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622367 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623335 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2640d0ff-e8c2-4795-bf96-9b862e10de22-serving-cert\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623373 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-client\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-szblf\" (UniqueName: \"kubernetes.io/projected/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-kube-api-access-szblf\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-tmpfs\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623542 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsq5\" (UniqueName: \"kubernetes.io/projected/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-kube-api-access-5qsq5\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623974 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.624418 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-images\") pod 
\"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.625566 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.125541852 +0000 UTC m=+158.575446794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625759 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625773 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl857\" (UniqueName: \"kubernetes.io/projected/e375e91d-f60e-4b86-87ee-a043c2b81128-kube-api-access-wl857\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: 
\"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqg9t\" (UniqueName: \"kubernetes.io/projected/8674271c-47a7-4722-9ceb-76e787b31485-kube-api-access-xqg9t\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e375e91d-f60e-4b86-87ee-a043c2b81128-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtb92\" (UniqueName: \"kubernetes.io/projected/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-kube-api-access-gtb92\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626458 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626499 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-serving-cert\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626755 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qzn\" (UniqueName: \"kubernetes.io/projected/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-kube-api-access-d4qzn\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: 
\"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e375e91d-f60e-4b86-87ee-a043c2b81128-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-service-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-config\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626884 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-socket-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626910 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626932 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-image-import-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626968 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-serving-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 
10:48:32.626995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit-dir\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.627628 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.628010 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0621386-4e3b-422a-93db-adcd616daa7a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.637334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/096c2622-3648-4579-8139-9d3a8d4a9006-proxy-tls\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.638186 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e375e91d-f60e-4b86-87ee-a043c2b81128-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.639525 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.640079 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-client\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.640091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-serving-cert\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.640267 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.641869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.643940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.644656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit-dir\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645028 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-node-pullsecrets\") pod 
\"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-encryption-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-node-pullsecrets\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645622 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645747 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxb5\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-kube-api-access-tvxb5\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8674271c-47a7-4722-9ceb-76e787b31485-cert\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-config\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.651411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.651614 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s9tfg"] Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.652820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.653297 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.656631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-serving-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.657400 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.658000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e375e91d-f60e-4b86-87ee-a043c2b81128-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.658069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc 
kubenswrapper[4727]: I0109 10:48:32.658408 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.659191 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-image-import-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.659953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ppp\" (UniqueName: \"kubernetes.io/projected/7e76cc6a-976f-4e61-8829-bbf3c4313293-kube-api-access-w2ppp\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.660338 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.660425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: 
\"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.662668 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-encryption-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: W0109 10:48:32.667133 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3f932b_fb41_4a2b_967b_a15de9606cbd.slice/crio-1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c WatchSource:0}: Error finding container 1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c: Status 404 returned error can't find the container with id 1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.674421 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.679439 4727 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.679604 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.679578784 +0000 UTC m=+159.129483565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.685538 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.694606 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.696138 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.714020 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.717313 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"] Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.718690 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.721210 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" event={"ID":"fab289a6-8124-413b-88f7-0ef3e4523b94","Type":"ContainerStarted","Data":"00c9abd5a627f7cbe2dec5c6f0e47f428f7df9864428e7a91b37a9fe46d111dc"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.721640 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.723314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" event={"ID":"e8459883-ed7a-4108-8198-ee2fbd63e891","Type":"ContainerStarted","Data":"81d7f870b23c46598d90f879b8747361b96a7182a6c7b06a1380f8c2775bee8a"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.723360 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" event={"ID":"e8459883-ed7a-4108-8198-ee2fbd63e891","Type":"ContainerStarted","Data":"c5bf64f73c69309b7c62b5a733aa637377bf02397c8db3c9a8c81adc87eec1be"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.724937 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerStarted","Data":"b91fc4ab06ef577d9c4e0fad8710798e885460e768b3d9d37cb5205f9fe286fa"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.726069 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" 
event={"ID":"1d3f932b-fb41-4a2b-967b-a15de9606cbd","Type":"ContainerStarted","Data":"1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.728547 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zcx2c" event={"ID":"5789711a-8f11-41c1-ac8d-eb5e60d147a1","Type":"ContainerStarted","Data":"c077485a0a1a3d46e50df5741e29227c5c48f6004b354b76516d54bc4c53ebd2"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.728581 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zcx2c" event={"ID":"5789711a-8f11-41c1-ac8d-eb5e60d147a1","Type":"ContainerStarted","Data":"44a05262b0d4443bc5c637943cdd990fe1d88cb57872329b1d23084ebccf53f1"} Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.735287 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 09 10:48:32 crc kubenswrapper[4727]: W0109 10:48:32.739320 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7604b799_797e_4127_84cf_3f7e1c17dc87.slice/crio-f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55 WatchSource:0}: Error finding container f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55: Status 404 returned error can't find the container with id f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55 Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.746866 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 
10:48:32.746957 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.747330 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.247283254 +0000 UTC m=+158.697188035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747514 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-config\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-socket-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747650 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747681 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxb5\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-kube-api-access-tvxb5\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8674271c-47a7-4722-9ceb-76e787b31485-cert\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747756 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-config\") pod \"etcd-operator-b45778765-25xhd\" (UID: 
\"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-plugins-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747859 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747933 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lqmc\" (UniqueName: 
\"kubernetes.io/projected/aa62f546-f6a1-46e8-9023-482a9e2e04b6-kube-api-access-8lqmc\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6l2v\" (UniqueName: \"kubernetes.io/projected/ea45a4de-3e71-4605-b02d-258b9dbb544c-kube-api-access-d6l2v\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748084 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-node-bootstrap-token\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748168 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-config\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" 
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748213 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-key\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748236 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-srv-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-srv-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2640d0ff-e8c2-4795-bf96-9b862e10de22-config\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4lmh\" (UniqueName: \"kubernetes.io/projected/27d5037e-e25b-4865-a1fe-7d165be1bf23-kube-api-access-p4lmh\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748557 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzd77\" (UniqueName: \"kubernetes.io/projected/879d1222-addb-406a-b8fd-3ce4068c1d08-kube-api-access-fzd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbc98\" (UniqueName: \"kubernetes.io/projected/402cb251-6fda-417f-a9bf-30b59833a3cd-kube-api-access-rbc98\") 
pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748630 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-certs\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748710 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-cabundle\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748741 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27d5037e-e25b-4865-a1fe-7d165be1bf23-config-volume\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa62f546-f6a1-46e8-9023-482a9e2e04b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-registration-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27d5037e-e25b-4865-a1fe-7d165be1bf23-metrics-tls\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748861 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748882 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgq6\" (UniqueName: \"kubernetes.io/projected/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-kube-api-access-8xgq6\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748902 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-client\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-config\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748974 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh47w\" (UniqueName: \"kubernetes.io/projected/50dba57c-02ba-4204-a8d0-6f95ffed6db7-kube-api-access-sh47w\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-webhook-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-plugins-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749149 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-mountpoint-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-csi-data-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749277 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9hw9\" (UniqueName: \"kubernetes.io/projected/2640d0ff-e8c2-4795-bf96-9b862e10de22-kube-api-access-k9hw9\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749367 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/879d1222-addb-406a-b8fd-3ce4068c1d08-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") 
" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-socket-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749608 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2640d0ff-e8c2-4795-bf96-9b862e10de22-serving-cert\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749707 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szblf\" (UniqueName: \"kubernetes.io/projected/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-kube-api-access-szblf\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-tmpfs\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749902 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsq5\" (UniqueName: \"kubernetes.io/projected/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-kube-api-access-5qsq5\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqg9t\" (UniqueName: \"kubernetes.io/projected/8674271c-47a7-4722-9ceb-76e787b31485-kube-api-access-xqg9t\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750022 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750049 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtb92\" (UniqueName: \"kubernetes.io/projected/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-kube-api-access-gtb92\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-serving-cert\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qzn\" (UniqueName: \"kubernetes.io/projected/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-kube-api-access-d4qzn\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-service-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750729 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-config\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-service-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.751030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-mountpoint-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.751107 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-csi-data-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.752882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-tmpfs\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.753031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-cabundle\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.753618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27d5037e-e25b-4865-a1fe-7d165be1bf23-config-volume\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.754223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.754800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.754797 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.254780362 +0000 UTC m=+158.704685143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755418 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-registration-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8674271c-47a7-4722-9ceb-76e787b31485-cert\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755756 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-config\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.757085 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-webhook-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.757118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa62f546-f6a1-46e8-9023-482a9e2e04b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.758470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-certs\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.759068 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.759693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2640d0ff-e8c2-4795-bf96-9b862e10de22-config\") pod 
\"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760002 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/879d1222-addb-406a-b8fd-3ce4068c1d08-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760479 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760660 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760654 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.761285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.758450 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.761437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.761751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.762757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.764849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27d5037e-e25b-4865-a1fe-7d165be1bf23-metrics-tls\") pod \"dns-default-ppcsh\" (UID: 
\"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.765002 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-node-bootstrap-token\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.766778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-srv-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.769951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-srv-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.769953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.769966 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod 
\"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.770019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-client\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.770072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-key\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.775161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.775762 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.777063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-serving-cert\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 
10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.778122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2640d0ff-e8c2-4795-bf96-9b862e10de22-serving-cert\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.778389 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.780056 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.780138 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.780114798 +0000 UTC m=+159.230019579 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781155 4727 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781220 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781311 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config podName:423f9db2-b3a1-406d-b906-bc4ba37fdb55 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.78118321 +0000 UTC m=+159.231087991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config") pod "openshift-apiserver-operator-796bbdcf4f-rbqsq" (UID: "423f9db2-b3a1-406d-b906-bc4ba37fdb55") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781485 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.781331774 +0000 UTC m=+159.231236735 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781811 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781924 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.78190105 +0000 UTC m=+159.231805831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781809 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.782252 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.782234911 +0000 UTC m=+159.232139702 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781577 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.782542 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.782530739 +0000 UTC m=+159.232435710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.781959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.803176 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.816698 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.841109 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.852851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.853690 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.353652818 +0000 UTC m=+158.803557599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.854165 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.854742 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.354733989 +0000 UTC m=+158.804638760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.862131 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.878618 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.899282 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.915959 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.934180 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.948260 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bgm\" (UniqueName: \"kubernetes.io/projected/423f9db2-b3a1-406d-b906-bc4ba37fdb55-kube-api-access-f4bgm\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.954646 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.958243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.961159 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.461112503 +0000 UTC m=+158.911017284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.965651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllpk\" (UniqueName: \"kubernetes.io/projected/c999b3d9-4231-4163-821a-b759599c6510-kube-api-access-hllpk\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.975914 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.984748 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/33b90f5a-a103-48d8-9eb1-fd7a153250ac-kube-api-access-9qcj6\") pod \"downloads-7954f5f757-5d9bz\" (UID: \"33b90f5a-a103-48d8-9eb1-fd7a153250ac\") " pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.994521 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:32.998307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.055831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46h56\" (UniqueName: \"kubernetes.io/projected/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-kube-api-access-46h56\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.060276 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.060817 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.560791733 +0000 UTC m=+159.010696514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.059249 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.096042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dmmg\" (UniqueName: \"kubernetes.io/projected/15a46c73-a8f2-427f-a701-01ccad52c6a1-kube-api-access-6dmmg\") pod \"migrator-59844c95c7-wxzs5\" (UID: \"15a46c73-a8f2-427f-a701-01ccad52c6a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.105766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.110646 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.114318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvzp\" (UniqueName: \"kubernetes.io/projected/198987e6-b5aa-4331-9e5e-4a51a02ab712-kube-api-access-rbvzp\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.130402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh4rg\" (UniqueName: \"kubernetes.io/projected/e0621386-4e3b-422a-93db-adcd616daa7a-kube-api-access-gh4rg\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.130800 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.131774 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.153912 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4c8l\" (UniqueName: \"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.160797 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.162183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.162402 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.662364947 +0000 UTC m=+159.112269728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.162809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.163355 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.663348896 +0000 UTC m=+159.113253677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.173894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57n4w\" (UniqueName: \"kubernetes.io/projected/096c2622-3648-4579-8139-9d3a8d4a9006-kube-api-access-57n4w\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.189642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl857\" (UniqueName: \"kubernetes.io/projected/e375e91d-f60e-4b86-87ee-a043c2b81128-kube-api-access-wl857\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.215016 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.230652 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.234277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.263734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.264443 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.764414566 +0000 UTC m=+159.214319347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.269081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6l2v\" (UniqueName: \"kubernetes.io/projected/ea45a4de-3e71-4605-b02d-258b9dbb544c-kube-api-access-d6l2v\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.279077 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxb5\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-kube-api-access-tvxb5\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.299209 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbc98\" (UniqueName: \"kubernetes.io/projected/402cb251-6fda-417f-a9bf-30b59833a3cd-kube-api-access-rbc98\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.305226 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.308792 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.310551 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9hw9\" (UniqueName: \"kubernetes.io/projected/2640d0ff-e8c2-4795-bf96-9b862e10de22-kube-api-access-k9hw9\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.312837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.322726 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.335677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.343021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.351206 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.351555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szblf\" (UniqueName: \"kubernetes.io/projected/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-kube-api-access-szblf\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.368456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.368990 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.868966328 +0000 UTC m=+159.318871109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.382975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.392847 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.399673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqg9t\" (UniqueName: \"kubernetes.io/projected/8674271c-47a7-4722-9ceb-76e787b31485-kube-api-access-xqg9t\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.419902 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qsq5\" (UniqueName: \"kubernetes.io/projected/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-kube-api-access-5qsq5\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.435179 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gtb92\" (UniqueName: \"kubernetes.io/projected/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-kube-api-access-gtb92\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.465142 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lqmc\" (UniqueName: \"kubernetes.io/projected/aa62f546-f6a1-46e8-9023-482a9e2e04b6-kube-api-access-8lqmc\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.466968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.469062 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.469552 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.969532062 +0000 UTC m=+159.419436843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.471625 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8lqcl"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.476757 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.481738 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.483311 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.492700 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.492947 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.494981 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:33 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:33 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:33 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.495031 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.506705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qzn\" (UniqueName: \"kubernetes.io/projected/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-kube-api-access-d4qzn\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.511377 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.515910 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.524602 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.535422 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.553566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh47w\" (UniqueName: \"kubernetes.io/projected/50dba57c-02ba-4204-a8d0-6f95ffed6db7-kube-api-access-sh47w\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.554914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.569922 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.571751 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.572373 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.072355904 +0000 UTC m=+159.522260685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.599687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgq6\" (UniqueName: \"kubernetes.io/projected/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-kube-api-access-8xgq6\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.600024 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.607184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.614254 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5d9bz"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.622725 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzd77\" (UniqueName: \"kubernetes.io/projected/879d1222-addb-406a-b8fd-3ce4068c1d08-kube-api-access-fzd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.631285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4lmh\" (UniqueName: \"kubernetes.io/projected/27d5037e-e25b-4865-a1fe-7d165be1bf23-kube-api-access-p4lmh\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.672494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.672977 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.172940639 +0000 UTC m=+159.622845420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.717853 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.747137 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.752064 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.772962 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.774844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.774931 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.775428 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.275389609 +0000 UTC m=+159.725294390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.779174 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.793868 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.807413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerStarted","Data":"7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.809197 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.809905 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.848904 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.865816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.867954 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.876844 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" event={"ID":"1d3f932b-fb41-4a2b-967b-a15de9606cbd","Type":"ContainerStarted","Data":"377dda43b2c98fed70c98b2ae4b706aba171eb66a0681a3802669479c5019605"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.877367 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.877662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878430 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.879881 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.880302 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.880781 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.380721753 +0000 UTC m=+159.830626534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.881359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.882242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.882326 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.885880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: 
\"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.892804 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.904637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-99dfz" event={"ID":"ea45a4de-3e71-4605-b02d-258b9dbb544c","Type":"ContainerStarted","Data":"f4db1b45ec2e457b7a2ff56b91950d8cd66199b63cc6ed9895ba28a908c491fb"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.904812 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.910139 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" event={"ID":"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d","Type":"ContainerStarted","Data":"3086ef3ded19987e359151417f1b56f20e76fe1a2c88e5862198ed710decfc2d"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.910198 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" event={"ID":"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d","Type":"ContainerStarted","Data":"2e8fc798a88b6d1d25186e12ba2db436e5433f254c2495c278f4ec1233609749"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.911862 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" event={"ID":"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b","Type":"ContainerStarted","Data":"37d8ae718f41f6a0950faffa848c62cc06c3dfeac506e3c9b7008cd17392904b"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.911889 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" event={"ID":"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b","Type":"ContainerStarted","Data":"529eefbe22c8dec19bde16436ca00af1e43b78c93a59cb01d933ed244b485005"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.916956 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" event={"ID":"15a46c73-a8f2-427f-a701-01ccad52c6a1","Type":"ContainerStarted","Data":"14b233435448ffa1ee59174043b0d26cfc4e75dca3d643d36c17c59d83d7a105"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.923657 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" event={"ID":"e8459883-ed7a-4108-8198-ee2fbd63e891","Type":"ContainerStarted","Data":"b39a61d49e2f5a9e8994af8e26be433519a5e3071e6951a060ce0c7abd5b818f"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.929621 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.930727 4727 generic.go:334] "Generic (PLEG): container finished" podID="7604b799-797e-4127-84cf-3f7e1c17dc87" containerID="237add48c3f106ae9133276b7ce2295893b915d817989efdff99ba5581e326dc" exitCode=0 Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.930821 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" event={"ID":"7604b799-797e-4127-84cf-3f7e1c17dc87","Type":"ContainerDied","Data":"237add48c3f106ae9133276b7ce2295893b915d817989efdff99ba5581e326dc"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.930874 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" event={"ID":"7604b799-797e-4127-84cf-3f7e1c17dc87","Type":"ContainerStarted","Data":"f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.931917 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerStarted","Data":"1e7988cfc3c9b4199125fca59cb133b0affc6fe32e3a90ef973bd39d5ee4a2bc"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.932792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5d9bz" event={"ID":"33b90f5a-a103-48d8-9eb1-fd7a153250ac","Type":"ContainerStarted","Data":"2933be720f5eafd11602af7494a86ab36b4f368c2d2223bb17bd5a9a8a9f19c1"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.980103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" 
(UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.984278 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.484258235 +0000 UTC m=+159.934163016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.002927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.082433 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.084099 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.584071408 +0000 UTC m=+160.033976189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.102601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.108795 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.184687 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.185415 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.685395616 +0000 UTC m=+160.135300397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.287024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.288862 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.788835335 +0000 UTC m=+160.238740126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.389343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.389778 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.88973812 +0000 UTC m=+160.339642891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: W0109 10:48:34.395656 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc999b3d9_4231_4163_821a_b759599c6510.slice/crio-3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63 WatchSource:0}: Error finding container 3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63: Status 404 returned error can't find the container with id 3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63 Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.488310 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:34 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:34 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:34 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.488370 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.492173 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.492572 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.99255473 +0000 UTC m=+160.442459511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.515055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.556171 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" podStartSLOduration=140.55613565 podStartE2EDuration="2m20.55613565s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.556034596 +0000 UTC m=+160.005939397" watchObservedRunningTime="2026-01-09 10:48:34.55613565 +0000 UTC m=+160.006040451" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.607592 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.607986 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.107967988 +0000 UTC m=+160.557872779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.709309 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.709813 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.209784839 +0000 UTC m=+160.659689620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.713296 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" podStartSLOduration=140.713265481 podStartE2EDuration="2m20.713265481s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.682047122 +0000 UTC m=+160.131951923" watchObservedRunningTime="2026-01-09 10:48:34.713265481 +0000 UTC m=+160.163170282" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.815778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.816810 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.316777892 +0000 UTC m=+160.766682673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.821267 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" podStartSLOduration=140.821212721 podStartE2EDuration="2m20.821212721s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.76482358 +0000 UTC m=+160.214728371" watchObservedRunningTime="2026-01-09 10:48:34.821212721 +0000 UTC m=+160.271117502" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.822306 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" podStartSLOduration=141.822290982 podStartE2EDuration="2m21.822290982s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.794217086 +0000 UTC m=+160.244121867" watchObservedRunningTime="2026-01-09 10:48:34.822290982 +0000 UTC m=+160.272195763" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.833673 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" podStartSLOduration=140.833652962 podStartE2EDuration="2m20.833652962s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.832606712 +0000 UTC m=+160.282511483" watchObservedRunningTime="2026-01-09 10:48:34.833652962 +0000 UTC m=+160.283557743" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.929307 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.929832 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.429808759 +0000 UTC m=+160.879713540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.976976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerStarted","Data":"929125b8b64331d2d6d391ab423a97e682d7d12d88e3ecc772238a6afa971136"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.023418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-99dfz" event={"ID":"ea45a4de-3e71-4605-b02d-258b9dbb544c","Type":"ContainerStarted","Data":"3ad3a1c7695129aa8d8ced3159c1d2b7d82ad6ef03c33d7264b30a28ff821909"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.032598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.033085 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.533067333 +0000 UTC m=+160.982972114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.083481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" event={"ID":"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d","Type":"ContainerStarted","Data":"dc4ac0a6c8b48eb3e132b9698adb597799e5c90cbf998db3a56ce970ecd14204"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.097204 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" event={"ID":"15a46c73-a8f2-427f-a701-01ccad52c6a1","Type":"ContainerStarted","Data":"f2c6bbada562da92b79ea1b845bd220e7b8a1e2fe2876a76da7080e8dae09bcd"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.101488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" event={"ID":"c999b3d9-4231-4163-821a-b759599c6510","Type":"ContainerStarted","Data":"3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.121350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5d9bz" event={"ID":"33b90f5a-a103-48d8-9eb1-fd7a153250ac","Type":"ContainerStarted","Data":"a194df4419f19dc760bce972698885958e3d8944f1106398ef9790de8436c302"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.121468 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5d9bz" 
Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.133951 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.134391 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.63435766 +0000 UTC m=+161.084262441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.134565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.142109 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:35.642082114 +0000 UTC m=+161.091986895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.144630 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.144673 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.227005 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.236362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.236876 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.736755928 +0000 UTC m=+161.186660700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.237930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.239543 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.739486348 +0000 UTC m=+161.189391129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.317291 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zcx2c" podStartSLOduration=141.3172642 podStartE2EDuration="2m21.3172642s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.313643595 +0000 UTC m=+160.763548396" watchObservedRunningTime="2026-01-09 10:48:35.3172642 +0000 UTC m=+160.767168981" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.339980 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.341134 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.841116093 +0000 UTC m=+161.291020874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.434889 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-99dfz" podStartSLOduration=5.4348613 podStartE2EDuration="5.4348613s" podCreationTimestamp="2026-01-09 10:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.430763831 +0000 UTC m=+160.880668612" watchObservedRunningTime="2026-01-09 10:48:35.4348613 +0000 UTC m=+160.884766081" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.442343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.442775 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.94276132 +0000 UTC m=+161.392666101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.472423 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5d9bz" podStartSLOduration=141.472398212 podStartE2EDuration="2m21.472398212s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.469119077 +0000 UTC m=+160.919023858" watchObservedRunningTime="2026-01-09 10:48:35.472398212 +0000 UTC m=+160.922302993" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.543811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.545728 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.043992995 +0000 UTC m=+161.493897776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.550708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.551612 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.051588006 +0000 UTC m=+161.501492787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.587725 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" podStartSLOduration=141.587698116 podStartE2EDuration="2m21.587698116s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.584007169 +0000 UTC m=+161.033911960" watchObservedRunningTime="2026-01-09 10:48:35.587698116 +0000 UTC m=+161.037602897" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.645240 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:35 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:35 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:35 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.645326 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.651866 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.652082 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.152041488 +0000 UTC m=+161.601946259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.652254 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.652780 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.152770469 +0000 UTC m=+161.602675250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.752958 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.753526 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.253478629 +0000 UTC m=+161.703383410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.861558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.862785 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.362765218 +0000 UTC m=+161.812670009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.918833 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xqcqv"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.929609 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.953648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.967559 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.967785 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.467746951 +0000 UTC m=+161.917651732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.967914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.967996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.968521 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.468477672 +0000 UTC m=+161.918382453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.023259 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode375e91d_f60e_4b86_87ee_a043c2b81128.slice/crio-b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04 WatchSource:0}: Error finding container b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04: Status 404 returned error can't find the container with id b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.029827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.046655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx72n"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.051692 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.059962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.060241 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"] Jan 09 10:48:36 
crc kubenswrapper[4727]: I0109 10:48:36.072534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.073050 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.573025294 +0000 UTC m=+162.022930075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.087585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.090689 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.098490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.105122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.108823 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.110075 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25xhd"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.123087 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" event={"ID":"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2","Type":"ContainerStarted","Data":"5a8e8755d9e7d9c0446931945fedc4c8e9e3bc443bf709901f0ac3a73068dc47"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.125466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" event={"ID":"e375e91d-f60e-4b86-87ee-a043c2b81128","Type":"ContainerStarted","Data":"b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04"} Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.130716 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096c2622_3648_4579_8139_9d3a8d4a9006.slice/crio-73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee WatchSource:0}: Error finding container 73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee: Status 404 returned error can't find the container with id 73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.132892 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" 
event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerStarted","Data":"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87"} Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.134948 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2640d0ff_e8c2_4795_bf96_9b862e10de22.slice/crio-be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381 WatchSource:0}: Error finding container be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381: Status 404 returned error can't find the container with id be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.145306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" event={"ID":"15a46c73-a8f2-427f-a701-01ccad52c6a1","Type":"ContainerStarted","Data":"3190019309aae729aec535edfcc30635b29c8e3223896dc72f3bf1f5351dc951"} Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.159413 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77f8346_e5c5_4f5e_9ac5_71fc4018dd2f.slice/crio-21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1 WatchSource:0}: Error finding container 21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1: Status 404 returned error can't find the container with id 21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.160317 4727 generic.go:334] "Generic (PLEG): container finished" podID="198987e6-b5aa-4331-9e5e-4a51a02ab712" containerID="d18920e2a077c5cda46113e3cc3f62a4796c5e64a833e970353657224ca906d6" exitCode=0 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.160566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" 
event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerDied","Data":"d18920e2a077c5cda46113e3cc3f62a4796c5e64a833e970353657224ca906d6"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.166465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"64c352984de1b1d53dfe07338f72589b6d7e501da4b28d4d63632f549e463612"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.171738 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-pjc7c" podStartSLOduration=143.171711294 podStartE2EDuration="2m23.171711294s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.155937985 +0000 UTC m=+161.605842776" watchObservedRunningTime="2026-01-09 10:48:36.171711294 +0000 UTC m=+161.621616095" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.175036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.175153 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.176618 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerStarted","Data":"ad82146e8d47df4ecdb309d20d0467e475d2f1c2c2694bb4124965245fd62da4"} Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.189154 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.689127651 +0000 UTC m=+162.139032432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.189635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.191187 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.225759 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerStarted","Data":"cb8511618c1168f1b695c78cda0dcd1111aea86736fe3350e8e14bc57a092c35"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.274926 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" podStartSLOduration=142.274905106 podStartE2EDuration="2m22.274905106s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.186768292 +0000 UTC m=+161.636673093" watchObservedRunningTime="2026-01-09 10:48:36.274905106 +0000 UTC m=+161.724809887" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.279722 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" event={"ID":"7604b799-797e-4127-84cf-3f7e1c17dc87","Type":"ContainerStarted","Data":"4a7f5a18dbb009a7091c2259d98bfa96692fda4d16903837f198aa560dcf585e"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.281239 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.281588 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.781481927 +0000 UTC m=+162.231386708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.282002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.284501 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.784485605 +0000 UTC m=+162.234390576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.299035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" event={"ID":"c999b3d9-4231-4163-821a-b759599c6510","Type":"ContainerStarted","Data":"2d2e741862a7a5ade9e107ff6bffbc5b387df0cbea4a3a6ba65415f7abf29614"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.299116 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" event={"ID":"c999b3d9-4231-4163-821a-b759599c6510","Type":"ContainerStarted","Data":"1b97fee42eafd23030e56f1a8dc68377690db45fdc4b7a19cbaa8f030ee72356"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.300839 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" event={"ID":"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6","Type":"ContainerStarted","Data":"6e25a29358077a675b378ed578a122e4372977e0549b974e46808032fef13ad6"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.302450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerStarted","Data":"887701e00f73eb4322aa6d1e2bd519ba9d9e95d1edd0663c388315ca72c944aa"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.304787 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.304869 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.310398 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tvd7t"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.311339 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" podStartSLOduration=142.311313845 podStartE2EDuration="2m22.311313845s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.308363779 +0000 UTC m=+161.758268580" watchObservedRunningTime="2026-01-09 10:48:36.311313845 +0000 UTC m=+161.761218646" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.382001 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" podStartSLOduration=143.381970871 podStartE2EDuration="2m23.381970871s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.351564606 +0000 UTC m=+161.801469397" watchObservedRunningTime="2026-01-09 10:48:36.381970871 +0000 UTC m=+161.831875652" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.382503 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ppcsh"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.383151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.384829 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.884809353 +0000 UTC m=+162.334714134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.395628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nz6pf"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.396934 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.400498 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.419464 4727 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.425766 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.432480 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.436011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.437799 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.484832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.485239 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.985223793 +0000 UTC m=+162.435128574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.488956 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:36 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:36 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:36 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.489032 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.490973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.521083 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkdts"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.586154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.586415 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.086356716 +0000 UTC m=+162.536261497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.586654 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.587141 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.087125358 +0000 UTC m=+162.537030139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.609980 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2ba90a_b9c8_4dbd_a1f5_324e3f12da9c.slice/crio-51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9 WatchSource:0}: Error finding container 51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9: Status 404 returned error can't find the container with id 51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9 Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.664746 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod423f9db2_b3a1_406d_b906_bc4ba37fdb55.slice/crio-95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee WatchSource:0}: Error finding container 95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee: Status 404 returned error can't find the container with id 95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.677867 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.688785 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.689269 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.189247008 +0000 UTC m=+162.639151789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.791103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.791675 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.291646527 +0000 UTC m=+162.741551428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.893669 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.893965 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.393916862 +0000 UTC m=+162.843821653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.894051 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.894620 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.394599692 +0000 UTC m=+162.844504473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.995395 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.995924 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.495893028 +0000 UTC m=+162.945797809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.000949 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vhsj4"] Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.097301 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.098009 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.597981808 +0000 UTC m=+163.047886589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.199757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.200219 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.700193531 +0000 UTC m=+163.150098312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.301984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.302779 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.802764114 +0000 UTC m=+163.252668885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.395871 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.395935 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.404494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.405053 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.905030539 +0000 UTC m=+163.354935320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.410976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" event={"ID":"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f","Type":"ContainerStarted","Data":"801a771c7a0625bfc59b15b1cf0fc993257825d49ccd5fc9671333700c59dd02"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.411071 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" event={"ID":"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f","Type":"ContainerStarted","Data":"21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.414487 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.416975 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.439469 4727 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lkqbn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.439564 4727 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" podUID="f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.456617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" event={"ID":"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e","Type":"ContainerStarted","Data":"b171c00f455526146e644db10192475f94aa9ea83ccb51d6fa6430e1a72f5e6b"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.456697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" event={"ID":"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e","Type":"ContainerStarted","Data":"11b5c1b051f4f1c9c843c0c72eef5f67ef0896cb1bcbf9f5d9c53648a697cde9"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.473049 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" event={"ID":"879d1222-addb-406a-b8fd-3ce4068c1d08","Type":"ContainerStarted","Data":"e71309d06338273bd0d538a59a5b81b2b5a63d25187d459b41c96fd68aad5695"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.493156 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:37 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:37 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:37 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.493254 4727 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.506728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.508483 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.008468918 +0000 UTC m=+163.458373699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.530100 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" event={"ID":"fe3c54e0-1aca-48bf-a737-cdb8c507f66d","Type":"ContainerStarted","Data":"bd138eb3645b214256a3bd5769c05bfcec82a07c0ad7d8f5894397afb8cfeb73"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.530158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" event={"ID":"fe3c54e0-1aca-48bf-a737-cdb8c507f66d","Type":"ContainerStarted","Data":"0af779d575cb512d96688dbd2794c73058049e86f35a57dc18ef9d9fe97ea3d9"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.538470 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" podStartSLOduration=143.538444871 podStartE2EDuration="2m23.538444871s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.476159238 +0000 UTC m=+162.926064009" watchObservedRunningTime="2026-01-09 10:48:37.538444871 +0000 UTC m=+162.988349652" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.552453 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" 
event={"ID":"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6","Type":"ContainerStarted","Data":"f67f44b6b817ada0d7e7583dd6706384d4c9284cd900dbd8a8a91af861231be4"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.556140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" event={"ID":"e375e91d-f60e-4b86-87ee-a043c2b81128","Type":"ContainerStarted","Data":"bab5a31c30b737153a7f184cf59a19984c1e9c5ceb52342c8221105f7a4fceb1"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.566091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" event={"ID":"e0621386-4e3b-422a-93db-adcd616daa7a","Type":"ContainerStarted","Data":"5f109339328ab84a5716df70bed4af6fada6f4467a56eaee2356f054dc120050"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.566145 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" event={"ID":"e0621386-4e3b-422a-93db-adcd616daa7a","Type":"ContainerStarted","Data":"40000b60cc1b2bd6a0d5284af3ef9e33ee5ed205fb3182b76bbcee3681754dae"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.576310 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" podStartSLOduration=143.576289391 podStartE2EDuration="2m23.576289391s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.542418585 +0000 UTC m=+162.992323366" watchObservedRunningTime="2026-01-09 10:48:37.576289391 +0000 UTC m=+163.026194172" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.584791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-vhsj4" event={"ID":"6a29665a-01da-4439-b13d-3950bf573044","Type":"ContainerStarted","Data":"c50cce3f7a2384c4ffeb17558511922a4c2d4961f8e77846d3b83a8f8e029466"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.604907 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerStarted","Data":"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.605794 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.608412 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.609855 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.109838017 +0000 UTC m=+163.559742798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.625420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" event={"ID":"be8a84bb-6eb3-4f11-8730-1bcb378cafa9","Type":"ContainerStarted","Data":"06d04df3dac6010b8701e6906839b91dc9bca91a2a8fd0afb2cc7f177e237e46"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.626275 4727 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ldkw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.626337 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.643491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerStarted","Data":"bf7c09a3701b9efda131588870469c1b6268f38bdcea1980699756debdae5027"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.645582 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.648963 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" podStartSLOduration=143.648928253 podStartE2EDuration="2m23.648928253s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.620082915 +0000 UTC m=+163.069987716" watchObservedRunningTime="2026-01-09 10:48:37.648928253 +0000 UTC m=+163.098833034" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.664789 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.665181 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.666428 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" event={"ID":"50dba57c-02ba-4204-a8d0-6f95ffed6db7","Type":"ContainerStarted","Data":"ab02b688fc95345574fda9a402f623919439933b5100b4c9a90d423bdd099e96"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.667867 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.682095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" event={"ID":"8e3b3a7a-6c2e-4bb5-8768-be94244740aa","Type":"ContainerStarted","Data":"2490ae4a3583e255dcbfb1794cba7dee8f901c42e3866e961afdc986e93bdb4d"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.691016 4727 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jtjg7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.691088 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" podUID="50dba57c-02ba-4204-a8d0-6f95ffed6db7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.697389 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podStartSLOduration=143.697367633 podStartE2EDuration="2m23.697367633s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.696191468 +0000 UTC m=+163.146096259" watchObservedRunningTime="2026-01-09 10:48:37.697367633 +0000 UTC m=+163.147272424" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.697675 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" podStartSLOduration=143.697669941 podStartE2EDuration="2m23.697669941s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.657802962 +0000 UTC m=+163.107707763" watchObservedRunningTime="2026-01-09 10:48:37.697669941 +0000 UTC m=+163.147574712" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.711737 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.716113 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.216082347 +0000 UTC m=+163.665987318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.745386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" event={"ID":"7e76cc6a-976f-4e61-8829-bbf3c4313293","Type":"ContainerStarted","Data":"6b3fd50c00f39b9584c418d0c77da39d12f67cee5675dbefcd5b5c3144112020"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.764978 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" event={"ID":"2640d0ff-e8c2-4795-bf96-9b862e10de22","Type":"ContainerStarted","Data":"be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.780857 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" podStartSLOduration=144.780836481 podStartE2EDuration="2m24.780836481s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.761220661 +0000 UTC m=+163.211125462" watchObservedRunningTime="2026-01-09 10:48:37.780836481 +0000 UTC m=+163.230741262" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.788849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" 
event={"ID":"aa62f546-f6a1-46e8-9023-482a9e2e04b6","Type":"ContainerStarted","Data":"5e576fb8ee950301ba9a269bde7e48c0fdb07ccd317e16b1cf7c1c911c8712cc"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.818157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.819960 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.319938449 +0000 UTC m=+163.769843240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.825105 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" event={"ID":"402cb251-6fda-417f-a9bf-30b59833a3cd","Type":"ContainerStarted","Data":"38d8ee1550f83a30b9189001e716189d87cfbff3cc78978ff319f10454e64e54"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.834033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" 
event={"ID":"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7","Type":"ContainerStarted","Data":"79d9ed59f4af60327a6223aa4d7908523fae3d2aacc537c101eff19740772d33"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.841203 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" podStartSLOduration=143.841184877 podStartE2EDuration="2m23.841184877s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.819292709 +0000 UTC m=+163.269197490" watchObservedRunningTime="2026-01-09 10:48:37.841184877 +0000 UTC m=+163.291089658" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.880111 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" podStartSLOduration=143.880088879 podStartE2EDuration="2m23.880088879s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.875794923 +0000 UTC m=+163.325699704" watchObservedRunningTime="2026-01-09 10:48:37.880088879 +0000 UTC m=+163.329993670" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.880961 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" podStartSLOduration=143.880956113 podStartE2EDuration="2m23.880956113s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.842414103 +0000 UTC m=+163.292318884" watchObservedRunningTime="2026-01-09 10:48:37.880956113 +0000 UTC m=+163.330860884" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 
10:48:37.921448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.921808 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.421792181 +0000 UTC m=+163.871696962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.972847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" event={"ID":"096c2622-3648-4579-8139-9d3a8d4a9006","Type":"ContainerStarted","Data":"907b67f6ed0eb76717e264b7b5b4ee1c06cbe9e1598e02ceb280a758a65b41c1"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.973024 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" event={"ID":"096c2622-3648-4579-8139-9d3a8d4a9006","Type":"ContainerStarted","Data":"73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.985412 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" event={"ID":"423f9db2-b3a1-406d-b906-bc4ba37fdb55","Type":"ContainerStarted","Data":"95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.005066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerStarted","Data":"f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.022985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.024367 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.524352325 +0000 UTC m=+163.974257106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.039332 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" podStartSLOduration=144.039306179 podStartE2EDuration="2m24.039306179s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.037126926 +0000 UTC m=+163.487031717" watchObservedRunningTime="2026-01-09 10:48:38.039306179 +0000 UTC m=+163.489210960" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.046030 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ppcsh" event={"ID":"27d5037e-e25b-4865-a1fe-7d165be1bf23","Type":"ContainerStarted","Data":"a28c063b1a8f11351ce12639f86cb865a33fed91f38ec293190f61afd87867de"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.057098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" event={"ID":"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c","Type":"ContainerStarted","Data":"51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.059386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.078745 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" event={"ID":"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2","Type":"ContainerStarted","Data":"85eaa9bf4508b8c054caa41cb60845ace2283fd2b119bd2e72b11e7f4c533e00"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.091278 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" podStartSLOduration=144.09125214 podStartE2EDuration="2m24.09125214s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.08643635 +0000 UTC m=+163.536341131" watchObservedRunningTime="2026-01-09 10:48:38.09125214 +0000 UTC m=+163.541156921" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.113988 4727 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xs5vp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.114066 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" podUID="cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.124813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.129540 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.629494383 +0000 UTC m=+164.079399164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.130901 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerStarted","Data":"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.131008 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.149796 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"96b22b2496db97d4d425031e52d6bce980c79a1600cdb97041a6c7cab8f9b132"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.161959 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlqcc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.162039 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.170177 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" podStartSLOduration=144.170154776 podStartE2EDuration="2m24.170154776s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.16892824 +0000 UTC m=+163.618833041" watchObservedRunningTime="2026-01-09 10:48:38.170154776 +0000 UTC m=+163.620059547" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.178502 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tvd7t" event={"ID":"8674271c-47a7-4722-9ceb-76e787b31485","Type":"ContainerStarted","Data":"945e9c3d527267f507f58dd0ce23f0c21ff89a8f15ec11b4ba5daf447cb9e23c"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.215054 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tvd7t" podStartSLOduration=8.215028361 podStartE2EDuration="8.215028361s" podCreationTimestamp="2026-01-09 10:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.2143291 +0000 UTC m=+163.664233881" watchObservedRunningTime="2026-01-09 10:48:38.215028361 +0000 UTC m=+163.664933142" Jan 09 
10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.226941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" event={"ID":"ff5b64d7-46ec-4f56-a044-4b57c96ebc03","Type":"ContainerStarted","Data":"3573280a8022ed2fdbb35102bc01caaff6fa5f9751d8bf241517a0363353173f"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.243696 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.288400 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.298287 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.798228961 +0000 UTC m=+164.248133742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.354090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.356470 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.856456375 +0000 UTC m=+164.306361156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.462792 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.464040 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.964009824 +0000 UTC m=+164.413914605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.494041 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:38 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:38 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:38 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.494142 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.566319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.566838 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:39.066817934 +0000 UTC m=+164.516722715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.667869 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.668297 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.168275555 +0000 UTC m=+164.618180336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.770844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.772523 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.272485876 +0000 UTC m=+164.722390647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.874410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.874697 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.374640028 +0000 UTC m=+164.824544809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.874764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.875124 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.375109611 +0000 UTC m=+164.825014392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.976108 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.976401 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.476355497 +0000 UTC m=+164.926260288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.976627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.977469 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.477458798 +0000 UTC m=+164.927363579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.077950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.078350 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.578303632 +0000 UTC m=+165.028208423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.078652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.079352 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.579341532 +0000 UTC m=+165.029246313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.179862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.180018 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.67996969 +0000 UTC m=+165.129874481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.180781 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.181294 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.681271857 +0000 UTC m=+165.131176638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.249618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" event={"ID":"be8a84bb-6eb3-4f11-8730-1bcb378cafa9","Type":"ContainerStarted","Data":"f17bc121e96a5a3a51ae16ccc0f9c9927126e182a8d2bea0f87316a012a17b7c"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.251670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" event={"ID":"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2","Type":"ContainerStarted","Data":"cdc86f2e00aa0249fe3898231615d899769b4e2722c517d0b80c2c9538b03224"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.257751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" event={"ID":"402cb251-6fda-417f-a9bf-30b59833a3cd","Type":"ContainerStarted","Data":"3391b00f1c4d60a4352a89f22e9c984d269778a0c6133f0b2fd79b74f9de3b2b"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.268491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" event={"ID":"879d1222-addb-406a-b8fd-3ce4068c1d08","Type":"ContainerStarted","Data":"e5370f5dbfb07ce0b3a3ebc6279659aaf4bb39fd9bf8468506ef4f4fe1facf2b"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.279432 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" event={"ID":"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7","Type":"ContainerStarted","Data":"176c47728240b5f7d4ec21b50e8b6f426f91eb78be302d0da850597fc66d8984"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.282749 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.282878 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.782848472 +0000 UTC m=+165.232753253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.283139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.283561 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.783536832 +0000 UTC m=+165.233441633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.285933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" event={"ID":"096c2622-3648-4579-8139-9d3a8d4a9006","Type":"ContainerStarted","Data":"7991ba421ab1162f4e8eef03610dce33434fb4c8e56a6d2e93189a5a9aa0efff"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.308983 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" podStartSLOduration=145.308961122 podStartE2EDuration="2m25.308961122s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.308229011 +0000 UTC m=+164.758133812" watchObservedRunningTime="2026-01-09 10:48:39.308961122 +0000 UTC m=+164.758865903" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.310563 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" podStartSLOduration=145.310558088 podStartE2EDuration="2m25.310558088s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.275437076 +0000 UTC m=+164.725341877" watchObservedRunningTime="2026-01-09 10:48:39.310558088 +0000 UTC m=+164.760462869" Jan 09 
10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.310816 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tvd7t" event={"ID":"8674271c-47a7-4722-9ceb-76e787b31485","Type":"ContainerStarted","Data":"efd7c01970885d7d711b6bc3c7616038082862b6a1884f23a5727799be34b097"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.325539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" event={"ID":"8e3b3a7a-6c2e-4bb5-8768-be94244740aa","Type":"ContainerStarted","Data":"198ce0c48a97ae659a008bf4fb01528f1083d6bffa1f27c6c0a6668ac5b1db08"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.364705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" event={"ID":"423f9db2-b3a1-406d-b906-bc4ba37fdb55","Type":"ContainerStarted","Data":"4f02fcb34ab3f66ca98113730cb607d8fc22c1dab41b0c4cc758db422fb293f7"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.365116 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" podStartSLOduration=145.365101385 podStartE2EDuration="2m25.365101385s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.364385454 +0000 UTC m=+164.814290255" watchObservedRunningTime="2026-01-09 10:48:39.365101385 +0000 UTC m=+164.815006166" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.380110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" event={"ID":"aa62f546-f6a1-46e8-9023-482a9e2e04b6","Type":"ContainerStarted","Data":"c66c664bbc6cebd5a8a70b99bfa16b183fb998286f49bbe05075cd690ee1810e"} Jan 09 10:48:39 crc 
kubenswrapper[4727]: I0109 10:48:39.380165 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" event={"ID":"aa62f546-f6a1-46e8-9023-482a9e2e04b6","Type":"ContainerStarted","Data":"78bece42a06534eeb4055575f987efbfa8ec2a2e2516cd1eb6dc1a9e148e860f"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.380704 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.387414 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.387889 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.887854276 +0000 UTC m=+165.337759057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.388384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.389616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ppcsh" event={"ID":"27d5037e-e25b-4865-a1fe-7d165be1bf23","Type":"ContainerStarted","Data":"9043ea11d4bfa015f4d078b898a2506b6281612e1fd9774d5784b87a44da26ce"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.389659 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ppcsh" event={"ID":"27d5037e-e25b-4865-a1fe-7d165be1bf23","Type":"ContainerStarted","Data":"a7a1fe934ad6e1b1852854a7194c0087f2e8bfe0dac5789c767dabc34b77ca70"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.390215 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.392663 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:39.892645016 +0000 UTC m=+165.342549797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.421933 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.422013 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.428255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" event={"ID":"7e76cc6a-976f-4e61-8829-bbf3c4313293","Type":"ContainerStarted","Data":"6ba9eb198b758659a2306f44a0c31794e4c21c53539dd0b9910c76bd53476ebd"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.440952 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" 
event={"ID":"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c","Type":"ContainerStarted","Data":"1a1b5e1caf0b625ea3f0dced0bdc083507159de55142ad65650b7b588346ae6f"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.469852 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" podStartSLOduration=145.469825241 podStartE2EDuration="2m25.469825241s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.468657997 +0000 UTC m=+164.918562808" watchObservedRunningTime="2026-01-09 10:48:39.469825241 +0000 UTC m=+164.919730022" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.473065 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" podStartSLOduration=145.473043074 podStartE2EDuration="2m25.473043074s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.40204036 +0000 UTC m=+164.851945151" watchObservedRunningTime="2026-01-09 10:48:39.473043074 +0000 UTC m=+164.922947865" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.481262 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.489552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 
10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.491553 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.991527662 +0000 UTC m=+165.441432453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.497423 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:39 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:39 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:39 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.497493 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.505768 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" podStartSLOduration=145.505749276 podStartE2EDuration="2m25.505749276s" podCreationTimestamp="2026-01-09 10:46:14 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.505040415 +0000 UTC m=+164.954945206" watchObservedRunningTime="2026-01-09 10:48:39.505749276 +0000 UTC m=+164.955654057" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.548942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" event={"ID":"fe3c54e0-1aca-48bf-a737-cdb8c507f66d","Type":"ContainerStarted","Data":"92e3c8d3498b69c691449ec52e4580576b50dca017312a82ed74a7a9b85c16a9"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.589709 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" podStartSLOduration=145.589686778 podStartE2EDuration="2m25.589686778s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.589231174 +0000 UTC m=+165.039135975" watchObservedRunningTime="2026-01-09 10:48:39.589686778 +0000 UTC m=+165.039591559" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.593950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerStarted","Data":"cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.594305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.594778 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.094760015 +0000 UTC m=+165.544664796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.636817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" event={"ID":"50dba57c-02ba-4204-a8d0-6f95ffed6db7","Type":"ContainerStarted","Data":"e915b1081555dcde799dbeb2baf0b20b0e26a619a62d5a6c225eeafa2db8312d"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.638831 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" podStartSLOduration=146.638805197 podStartE2EDuration="2m26.638805197s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.636217911 +0000 UTC m=+165.086122712" watchObservedRunningTime="2026-01-09 10:48:39.638805197 +0000 UTC m=+165.088709978" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.672676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.685947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" event={"ID":"e0621386-4e3b-422a-93db-adcd616daa7a","Type":"ContainerStarted","Data":"7044afa95d4a853088eee1ae0a900a7e0a082eff2c9323139d0cc56f3cd9c72c"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.705341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.707010 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.206982029 +0000 UTC m=+165.656886810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.727707 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ppcsh" podStartSLOduration=10.727682862 podStartE2EDuration="10.727682862s" podCreationTimestamp="2026-01-09 10:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.722774989 +0000 UTC m=+165.172679770" watchObservedRunningTime="2026-01-09 10:48:39.727682862 +0000 UTC m=+165.177587653" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.732381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" event={"ID":"6a29665a-01da-4439-b13d-3950bf573044","Type":"ContainerStarted","Data":"8f10a5d1cb7fca9de7bef059a7a6f653e8861716ff14d18ad84dbc869ca8327e"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.762567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerStarted","Data":"1c3e07556aaa1ef418a783426fd229b444fd9dfb3f3bb091f13524103b97b3f1"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.763022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerStarted","Data":"35b96fa56688bb4a498cea3ba751f816b1c4710792ee5fb20818b2dd16dc557a"} Jan 09 10:48:39 crc 
kubenswrapper[4727]: I0109 10:48:39.778339 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" podStartSLOduration=146.778320164 podStartE2EDuration="2m26.778320164s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.775489362 +0000 UTC m=+165.225394153" watchObservedRunningTime="2026-01-09 10:48:39.778320164 +0000 UTC m=+165.228224935" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.795574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" event={"ID":"ff5b64d7-46ec-4f56-a044-4b57c96ebc03","Type":"ContainerStarted","Data":"4f00542fa3797718e8f8f68230f017aad4b48904e67d41744a2635654f3af3d1"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.795635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" event={"ID":"ff5b64d7-46ec-4f56-a044-4b57c96ebc03","Type":"ContainerStarted","Data":"ce8043b00a697d39e6150d97af2be0105403da2dbda5615fd710655842784f38"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.809406 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.830909 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" 
event={"ID":"2640d0ff-e8c2-4795-bf96-9b862e10de22","Type":"ContainerStarted","Data":"21731b9f0102bf163dcc63260bcdda7995c6a5398da2ebab35c8edb156e6f4b8"} Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.841205 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.341173033 +0000 UTC m=+165.791077814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.873807 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.885276 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.893944 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.918135 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc 
kubenswrapper[4727]: E0109 10:48:39.918343 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.418323417 +0000 UTC m=+165.868228198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.921767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.924294 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.424278811 +0000 UTC m=+165.874183592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.942179 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" podStartSLOduration=146.94215485 podStartE2EDuration="2m26.94215485s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.877927042 +0000 UTC m=+165.327831853" watchObservedRunningTime="2026-01-09 10:48:39.94215485 +0000 UTC m=+165.392059631" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.964071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.012479 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" podStartSLOduration=146.012452235 podStartE2EDuration="2m26.012452235s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.011094976 +0000 UTC m=+165.460999767" watchObservedRunningTime="2026-01-09 10:48:40.012452235 +0000 UTC m=+165.462357026" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.022713 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.023543 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.523519057 +0000 UTC m=+165.973423838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.047376 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" podStartSLOduration=147.04734739 podStartE2EDuration="2m27.04734739s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.044014153 +0000 UTC m=+165.493918954" watchObservedRunningTime="2026-01-09 10:48:40.04734739 +0000 UTC m=+165.497252171" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.128252 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.128686 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.628643645 +0000 UTC m=+166.078548426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.229261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.230952 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.73091428 +0000 UTC m=+166.180819061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.231320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.231811 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.731795375 +0000 UTC m=+166.181700156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.332546 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.332931 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.832907116 +0000 UTC m=+166.282811887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.434460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.434952 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.934934514 +0000 UTC m=+166.384839295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.443139 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.444301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.478444 4727 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.492929 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:40 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:40 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:40 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.493016 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.523055 4727 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.535484 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.535864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.536002 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.536051 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.536224 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.03619002 +0000 UTC m=+166.486094801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.570726 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.638582 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.638639 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.638668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 
10:48:40.638694 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.639095 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.139079273 +0000 UTC m=+166.588984054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.640248 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.640621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 
10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.659391 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" podStartSLOduration=146.659361573 podStartE2EDuration="2m26.659361573s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.601795488 +0000 UTC m=+166.051700269" watchObservedRunningTime="2026-01-09 10:48:40.659361573 +0000 UTC m=+166.109266354" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.683550 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.740190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.740595 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.240570645 +0000 UTC m=+166.690475426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.749627 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.750884 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.764392 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.784808 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842116 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842202 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842243 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.842864 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.34284891 +0000 UTC m=+166.792753691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.879692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" event={"ID":"6a29665a-01da-4439-b13d-3950bf573044","Type":"ContainerStarted","Data":"5e1b4ef8a5e34344096d1e1baf163e63534454b5b429e4cf7df8f6670cfb6c04"} Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.879766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"f38f0ba07c474f580b4e6f9ae3c73c71c9f7040d2572f9b245f9c3e90e1a2009"} Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.903213 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vhsj4" podStartSLOduration=146.903174075 podStartE2EDuration="2m26.903174075s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.899979151 +0000 UTC m=+166.349883932" watchObservedRunningTime="2026-01-09 10:48:40.903174075 +0000 UTC m=+166.353078856" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.943791 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.944082 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.444031223 +0000 UTC m=+166.893936014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.944163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.945477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.946074 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.946194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.949597 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.950969 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.952884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.954445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.958215 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.458190805 +0000 UTC m=+166.908095586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.967247 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.992904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.040800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051163 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " 
pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: E0109 10:48:41.051728 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.551704695 +0000 UTC m=+167.001609476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.077890 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154723 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154768 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:41 crc kubenswrapper[4727]: E0109 10:48:41.155181 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:41.655166155 +0000 UTC m=+167.105070936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.155802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.156030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.170209 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.186663 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.187398 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.199071 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270219 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270552 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270600 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270625 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: E0109 10:48:41.270789 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.770766337 +0000 UTC m=+167.220671108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.290286 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.307303 4727 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-09T10:48:40.478486791Z","Handler":null,"Name":""} Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.346776 4727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.346823 4727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380665 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"community-operators-tlqjk\" (UID: 
\"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.382237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.382821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.390334 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.390389 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.428999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.496453 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:41 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:41 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:41 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.496572 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.544729 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.630104 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.658016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.694952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.722192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.728964 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.050257 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerStarted","Data":"fb23bdfd131c74ca699783debec87aba4e592b8f689b5331a1ea091df7d605ad"} Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.062585 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"ed649f8bba2b6a1a1bdd2f6edb8806bcb6cf173c0ad14cf474f40d6662ecc2fd"} Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.127875 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.483954 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.535567 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:42 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:42 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:42 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.535676 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.564996 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.566499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.582223 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.622805 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.651555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.652067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.652125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.754407 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.754471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.754531 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.755609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.755913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.768135 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.838416 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.861875 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.924416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.936391 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.951278 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.957541 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.968368 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.057497 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.058443 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.062752 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.068485 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.076024 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.076113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.076135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.083012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.133583 4727 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.133745 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.133829 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.134290 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.134854 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.163605 4727 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8lqcl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]log ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]etcd ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/generic-apiserver-start-informers ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/max-in-flight-filter ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 09 10:48:43 crc kubenswrapper[4727]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Jan 09 10:48:43 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 09 10:48:43 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectcache ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startinformers ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 09 10:48:43 crc kubenswrapper[4727]: livez check failed Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.163691 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" podUID="198987e6-b5aa-4331-9e5e-4a51a02ab712" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.172376 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.172455 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.181498 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerStarted","Data":"4e7da0de585649169fd8cf1b1066a4fe59cfd2aac18387a51307fee26f57796c"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.184011 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.184289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.191538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerStarted","Data":"a179ea666208967ecfd43822950b057cd35581408873a5090e17c2f3344f91f0"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.210480 4727 generic.go:334] "Generic (PLEG): container finished" podID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerID="f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d" exitCode=0 Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.210617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerDied","Data":"f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.237163 4727 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.237217 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.239212 4727 patch_prober.go:28] interesting pod/console-f9d7485db-pjc7c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.239256 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pjc7c" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.247391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.250201 4727 generic.go:334] "Generic (PLEG): container finished" podID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e" exitCode=0 Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.250391 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 
10:48:43.250438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerStarted","Data":"fdad070e71d4bbce550062d735b7d4a59eda1ba60bd27a561289a761c73ac4de"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.259288 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.270907 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" exitCode=0 Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.271365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.287102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.287159 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.290363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.341589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.354689 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"1430957bbab696cf47fc69d0b7a87908c92a1d373838b2994a4675be3c429e36"} Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.401150 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.485914 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.494766 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:43 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:43 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:43 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.494848 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.731419 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" podStartSLOduration=13.731391903 podStartE2EDuration="13.731391903s" podCreationTimestamp="2026-01-09 10:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:43.404185945 +0000 UTC m=+168.854090726" watchObservedRunningTime="2026-01-09 10:48:43.731391903 +0000 UTC m=+169.181296684" Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.735283 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.991002 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.993074 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.006077 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.017385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.140400 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.140474 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.140534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.241863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " 
pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.241946 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.241985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.242634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.243058 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.283157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv" Jan 
09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.322312 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.352474 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.356957 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.361990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.381006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.402425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerStarted","Data":"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.402487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerStarted","Data":"ddbd37f0ce66367420bf898e597290bc9a838afaf3a3a6e5e804343b2dd74136"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.403318 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.437749 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.439070 4727 generic.go:334] "Generic (PLEG): container finished" podID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" exitCode=0 Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.439216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.439259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerStarted","Data":"974cefab389bdd1c50fa8159159be952f608b390b753f134588ad26e90c6144f"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.457954 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.458105 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.458164 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdl4\" (UniqueName: 
\"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.478530 4727 generic.go:334] "Generic (PLEG): container finished" podID="847f9d70-de5c-4bc0-9823-c4074e353565" containerID="d91d351a8c554abc2fdcaa83ba21ac1cd2528cb470f7cc7b072bc6c71cf7875d" exitCode=0 Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.478680 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"d91d351a8c554abc2fdcaa83ba21ac1cd2528cb470f7cc7b072bc6c71cf7875d"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.531389 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:44 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:44 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:44 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.533788 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7741215-a775-4b93-9062-45e620560d49" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" exitCode=0 Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.534467 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" podStartSLOduration=150.534451612 podStartE2EDuration="2m30.534451612s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-09 10:48:44.45666759 +0000 UTC m=+169.906572401" watchObservedRunningTime="2026-01-09 10:48:44.534451612 +0000 UTC m=+169.984356393" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.538597 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.539471 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.560492 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.560604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.560734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc 
kubenswrapper[4727]: I0109 10:48:44.566286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.566644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.621357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.720784 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.187192 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.285216 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.286038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.286124 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.287087 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume" (OuterVolumeSpecName: "config-volume") pod "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" (UID: "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.300453 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" (UID: "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.323529 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8" (OuterVolumeSpecName: "kube-api-access-gcsp8") pod "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" (UID: "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd"). InnerVolumeSpecName "kube-api-access-gcsp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.352367 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.387491 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.387559 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.387573 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:45 crc kubenswrapper[4727]: W0109 10:48:45.407317 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e3f567_63b4_4a95_b9df_5ec10f0ec4f2.slice/crio-42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb WatchSource:0}: Error finding container 42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb: Status 404 returned error can't find the container with id 
42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.448886 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:48:45 crc kubenswrapper[4727]: W0109 10:48:45.466777 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb9e6995_13ec_46a4_a659_0acc617449d3.slice/crio-5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e WatchSource:0}: Error finding container 5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e: Status 404 returned error can't find the container with id 5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.487388 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:45 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:45 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:45 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.487443 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.573348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerStarted","Data":"5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.576410 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerStarted","Data":"42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.584494 4727 generic.go:334] "Generic (PLEG): container finished" podID="52829665-e7e7-4322-a38e-731d67de0a1e" containerID="22ac19595fc4f0a184b8660c25bad2c44186a8659978bbc2fc9d9b604da4ef99" exitCode=0 Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.584610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"22ac19595fc4f0a184b8660c25bad2c44186a8659978bbc2fc9d9b604da4ef99"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.584649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerStarted","Data":"301dab3d04bf736cfc1cfc161435219d3d49e05da644c5b2c0bdb5bb934e1806"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.593349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerStarted","Data":"451f4bab74d641f0b415344bd0f9f45f49b43975fc46c42177b88bbf21165424"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.593395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerStarted","Data":"a9e5e14ca11d9d94c3416a4d34194e4e50491ebe7c102192c37b8e67893ce2cd"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.603207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerDied","Data":"ad82146e8d47df4ecdb309d20d0467e475d2f1c2c2694bb4124965245fd62da4"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.603252 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.603265 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82146e8d47df4ecdb309d20d0467e475d2f1c2c2694bb4124965245fd62da4" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.633587 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.633561323 podStartE2EDuration="2.633561323s" podCreationTimestamp="2026-01-09 10:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:45.629721831 +0000 UTC m=+171.079626612" watchObservedRunningTime="2026-01-09 10:48:45.633561323 +0000 UTC m=+171.083466114" Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.512878 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:46 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:46 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:46 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.512972 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.631661 4727 generic.go:334] "Generic (PLEG): container finished" podID="db9e6995-13ec-46a4-a659-0acc617449d3" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24" exitCode=0 Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.631922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"} Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.645081 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" exitCode=0 Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.647093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222"} Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.502726 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:47 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:47 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:47 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.503308 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.666145 4727 generic.go:334] "Generic (PLEG): container finished" podID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerID="451f4bab74d641f0b415344bd0f9f45f49b43975fc46c42177b88bbf21165424" exitCode=0 Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.666204 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerDied","Data":"451f4bab74d641f0b415344bd0f9f45f49b43975fc46c42177b88bbf21165424"} Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.154970 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.160859 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.488991 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:48 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:48 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:48 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.489087 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.906682 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.189426 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.237916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 10:48:49 crc kubenswrapper[4727]: E0109 10:48:49.238712 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerName="pruner" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238735 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerName="pruner" Jan 09 10:48:49 crc kubenswrapper[4727]: E0109 10:48:49.238756 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerName="collect-profiles" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238765 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerName="collect-profiles" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238899 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerName="pruner" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238920 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerName="collect-profiles" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.239523 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.244119 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.247259 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.258387 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.317825 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"658c98ad-94ee-4294-a8b9-b2b041a83e37\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318275 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"658c98ad-94ee-4294-a8b9-b2b041a83e37\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "658c98ad-94ee-4294-a8b9-b2b041a83e37" (UID: "658c98ad-94ee-4294-a8b9-b2b041a83e37"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318789 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318849 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.348114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "658c98ad-94ee-4294-a8b9-b2b041a83e37" (UID: "658c98ad-94ee-4294-a8b9-b2b041a83e37"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419442 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419624 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.472145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.485773 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:49 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:49 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:49 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.485867 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.587674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.716464 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerDied","Data":"a9e5e14ca11d9d94c3416a4d34194e4e50491ebe7c102192c37b8e67893ce2cd"} Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.716544 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e5e14ca11d9d94c3416a4d34194e4e50491ebe7c102192c37b8e67893ce2cd" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.716637 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.346759 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 10:48:50 crc kubenswrapper[4727]: W0109 10:48:50.370070 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd8a2cf2b_2d26_4698_8fe0_17170dd1d102.slice/crio-9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de WatchSource:0}: Error finding container 9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de: Status 404 returned error can't find the container with id 9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.487183 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:50 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:50 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:50 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.487253 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.763283 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerStarted","Data":"9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de"} Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.209030 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.486339 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:51 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:51 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:51 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.486414 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.794550 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerStarted","Data":"223ad946131ca206a81a1d53ebb182247d8fab8b452b2d0d147d8e26b668f0e1"} Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.491601 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.495683 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.514246 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.514186929 podStartE2EDuration="3.514186929s" podCreationTimestamp="2026-01-09 10:48:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:51.836142626 +0000 UTC m=+177.286047407" watchObservedRunningTime="2026-01-09 10:48:52.514186929 +0000 UTC m=+177.964091710" Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.838642 4727 generic.go:334] "Generic (PLEG): container finished" podID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerID="223ad946131ca206a81a1d53ebb182247d8fab8b452b2d0d147d8e26b668f0e1" exitCode=0 Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.839745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerDied","Data":"223ad946131ca206a81a1d53ebb182247d8fab8b452b2d0d147d8e26b668f0e1"} Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.133982 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.134053 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.140172 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.140272 4727 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.232565 4727 patch_prober.go:28] interesting pod/console-f9d7485db-pjc7c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.232624 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pjc7c" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.951971 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.952759 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" containerID="cri-o://7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879" gracePeriod=30 Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.958287 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.958607 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" 
containerName="controller-manager" containerID="cri-o://cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a" gracePeriod=30 Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.000687 4727 generic.go:334] "Generic (PLEG): container finished" podID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerID="7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879" exitCode=0 Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.000794 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerDied","Data":"7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879"} Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.003816 4727 generic.go:334] "Generic (PLEG): container finished" podID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerID="cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a" exitCode=0 Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.003847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerDied","Data":"cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a"} Jan 09 10:49:01 crc kubenswrapper[4727]: I0109 10:49:01.735276 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.230673 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.233923 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.269624 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:02 crc kubenswrapper[4727]: E0109 10:49:02.269988 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270005 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" Jan 09 10:49:02 crc kubenswrapper[4727]: E0109 10:49:02.270016 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerName="pruner" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270024 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerName="pruner" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270134 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerName="pruner" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270155 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270784 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.291721 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333984 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334060 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\" (UID: 
\"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334111 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d8a2cf2b-2d26-4698-8fe0-17170dd1d102" (UID: "d8a2cf2b-2d26-4698-8fe0-17170dd1d102"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334410 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.335022 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca" (OuterVolumeSpecName: "client-ca") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.335320 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config" (OuterVolumeSpecName: "config") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.346882 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d8a2cf2b-2d26-4698-8fe0-17170dd1d102" (UID: "d8a2cf2b-2d26-4698-8fe0-17170dd1d102"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.347117 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.347807 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj" (OuterVolumeSpecName: "kube-api-access-dxxfj") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "kube-api-access-dxxfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435929 4727 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435945 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435959 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.436034 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.436086 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " 
pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537421 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.539854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.540390 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.547430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.558282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.589105 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.031340 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.032756 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerDied","Data":"b91fc4ab06ef577d9c4e0fad8710798e885460e768b3d9d37cb5205f9fe286fa"} Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.032795 4727 scope.go:117] "RemoveContainer" containerID="7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.034000 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerDied","Data":"9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de"} Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.034026 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.034069 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.060710 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.063804 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.145382 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.268774 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.275714 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:49:04 crc kubenswrapper[4727]: I0109 10:49:04.868838 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" path="/var/lib/kubelet/pods/85ff3ef7-a005-4881-9004-73bc686b4aae/volumes" Jan 09 10:49:05 crc kubenswrapper[4727]: I0109 10:49:05.003668 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:49:05 crc kubenswrapper[4727]: I0109 10:49:05.003747 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:49:09 crc kubenswrapper[4727]: I0109 10:49:09.405392 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:49:09 crc kubenswrapper[4727]: I0109 10:49:09.406285 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:49:13 crc kubenswrapper[4727]: I0109 10:49:13.752579 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:49:15 crc kubenswrapper[4727]: I0109 10:49:15.003725 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" start-of-body= Jan 09 10:49:15 crc kubenswrapper[4727]: I0109 10:49:15.003785 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" Jan 09 10:49:19 crc kubenswrapper[4727]: I0109 10:49:19.003415 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:19 crc kubenswrapper[4727]: E0109 10:49:19.636460 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 09 10:49:19 crc kubenswrapper[4727]: E0109 10:49:19.637056 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shvml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePol
icy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-d2hxb_openshift-marketplace(ee7a242f-7b69-4d13-bc60-f9c519d29024): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:19 crc kubenswrapper[4727]: E0109 10:49:19.638282 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d2hxb" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" Jan 09 10:49:22 crc kubenswrapper[4727]: E0109 10:49:22.370297 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d2hxb" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.003217 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.003286 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:49:25 crc 
kubenswrapper[4727]: I0109 10:49:25.234070 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.235060 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.239141 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.239170 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.244399 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.409394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.409963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.510903 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.510961 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.511082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.537718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.604850 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:27 crc kubenswrapper[4727]: E0109 10:49:27.434899 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 10:49:27 crc kubenswrapper[4727]: E0109 10:49:27.435143 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjjlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tlqjk_openshift-marketplace(847f9d70-de5c-4bc0-9823-c4074e353565): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:27 crc kubenswrapper[4727]: E0109 10:49:27.436347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.736597 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.806308 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.806559 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f74xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lj7dw_openshift-marketplace(f7741215-a775-4b93-9062-45e620560d49): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.807785 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lj7dw" podUID="f7741215-a775-4b93-9062-45e620560d49" Jan 09 10:49:28 crc 
kubenswrapper[4727]: I0109 10:49:28.865865 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.914021 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.914472 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.914558 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.914750 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.915427 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.920137 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065088 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065184 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: 
\"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065582 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065620 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065644 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065848 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g5j\" (UniqueName: 
\"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.067595 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.067649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.067691 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config" (OuterVolumeSpecName: "config") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.074962 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.082314 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk" (OuterVolumeSpecName: "kube-api-access-vpmsk") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "kube-api-access-vpmsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.167135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168435 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168596 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168651 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168664 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168675 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168684 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168693 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") on node \"crc\" DevicePath 
\"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.170467 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.171298 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.176728 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.186976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " 
pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.208123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerDied","Data":"bf7c09a3701b9efda131588870469c1b6268f38bdcea1980699756debdae5027"} Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.208153 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.235851 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.252145 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.257518 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.430096 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.431001 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.454468 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.586749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.586812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.586866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688682 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688849 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688923 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.708771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.762152 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.876543 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" path="/var/lib/kubelet/pods/b80bab42-ad32-4ec1-83c3-d939b007a97b/volumes" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.540328 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lj7dw" podUID="f7741215-a775-4b93-9062-45e620560d49" Jan 09 10:49:32 crc kubenswrapper[4727]: I0109 10:49:32.568196 4727 scope.go:117] "RemoveContainer" containerID="cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.631157 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.631943 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk7rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dpfxv_openshift-marketplace(e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.633343 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" Jan 09 10:49:32 crc 
kubenswrapper[4727]: E0109 10:49:32.654848 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.655103 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmdl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-qdwnw_openshift-marketplace(db9e6995-13ec-46a4-a659-0acc617449d3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.659075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.016497 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 10:49:33 crc kubenswrapper[4727]: W0109 10:49:33.026585 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8f187469_eca7_43d1_80a1_5b67f7aff838.slice/crio-ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4 WatchSource:0}: Error finding container ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4: Status 404 returned error can't find the container with id ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.133526 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.139877 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.155534 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:33 crc kubenswrapper[4727]: W0109 10:49:33.189723 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151a8455_f0b6_44d2_a258_0d7a23683e88.slice/crio-eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8 WatchSource:0}: Error finding container eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8: Status 404 returned error can't find the container with id eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8 Jan 09 10:49:33 crc kubenswrapper[4727]: W0109 10:49:33.192459 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064897bd_61aa_4547_8de9_14abed17dad2.slice/crio-8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9 WatchSource:0}: Error finding container 8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9: Status 404 returned error can't find the container with id 8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.253805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerStarted","Data":"eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.255258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerStarted","Data":"8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.256158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerStarted","Data":"ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4"} Jan 09 10:49:33 crc 
kubenswrapper[4727]: I0109 10:49:33.258177 4727 generic.go:334] "Generic (PLEG): container finished" podID="52829665-e7e7-4322-a38e-731d67de0a1e" containerID="365d92d81408d60ec382bc6ab0b4a9e0d23f934158b015c99820128061ced4a5" exitCode=0 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.258244 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"365d92d81408d60ec382bc6ab0b4a9e0d23f934158b015c99820128061ced4a5"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.266425 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" exitCode=0 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.266480 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.269933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9","Type":"ContainerStarted","Data":"cf4cf680cbea13629ffb9b6b950ebd32261c06a8a59c690f1c39f7cb05418444"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.272957 4727 generic.go:334] "Generic (PLEG): container finished" podID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" exitCode=0 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.273078 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" 
event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568"} Jan 09 10:49:33 crc kubenswrapper[4727]: E0109 10:49:33.274239 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" Jan 09 10:49:33 crc kubenswrapper[4727]: E0109 10:49:33.274462 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.285316 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerStarted","Data":"6db409e85d88995423280632c4625000e42915184376c39b6a7a5ad209ecd5b5"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.289439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerStarted","Data":"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.290705 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.293751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" 
event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerStarted","Data":"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.296746 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" containerID="cri-o://90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" gracePeriod=30 Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.296969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerStarted","Data":"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.297220 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.297239 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.300159 4727 generic.go:334] "Generic (PLEG): container finished" podID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerID="baf98b7c04c8a65d35f9b312da3e7cc77bcd1a0ca0d075f57a151a8fb7edda1a" exitCode=0 Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.300237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9","Type":"ContainerDied","Data":"baf98b7c04c8a65d35f9b312da3e7cc77bcd1a0ca0d075f57a151a8fb7edda1a"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.303070 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerStarted","Data":"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.305528 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.305634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerStarted","Data":"64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.344013 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.343978902 podStartE2EDuration="4.343978902s" podCreationTimestamp="2026-01-09 10:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:34.317075679 +0000 UTC m=+219.766980480" watchObservedRunningTime="2026-01-09 10:49:34.343978902 +0000 UTC m=+219.793883683" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.364631 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pgnj5" podStartSLOduration=4.259943759 podStartE2EDuration="52.364608863s" podCreationTimestamp="2026-01-09 10:48:42 +0000 UTC" firstStartedPulling="2026-01-09 10:48:45.58703087 +0000 UTC m=+171.036935651" lastFinishedPulling="2026-01-09 10:49:33.691695974 +0000 UTC m=+219.141600755" observedRunningTime="2026-01-09 10:49:34.341751514 +0000 UTC m=+219.791656295" watchObservedRunningTime="2026-01-09 10:49:34.364608863 +0000 UTC m=+219.814513644" Jan 09 10:49:34 
crc kubenswrapper[4727]: I0109 10:49:34.384570 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dtgwm" podStartSLOduration=3.038206464 podStartE2EDuration="52.384552076s" podCreationTimestamp="2026-01-09 10:48:42 +0000 UTC" firstStartedPulling="2026-01-09 10:48:44.493933453 +0000 UTC m=+169.943838235" lastFinishedPulling="2026-01-09 10:49:33.840279076 +0000 UTC m=+219.290183847" observedRunningTime="2026-01-09 10:49:34.368212322 +0000 UTC m=+219.818117103" watchObservedRunningTime="2026-01-09 10:49:34.384552076 +0000 UTC m=+219.834456857" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.409233 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qzjvr" podStartSLOduration=3.918587048 podStartE2EDuration="54.409209269s" podCreationTimestamp="2026-01-09 10:48:40 +0000 UTC" firstStartedPulling="2026-01-09 10:48:43.280404305 +0000 UTC m=+168.730309086" lastFinishedPulling="2026-01-09 10:49:33.771026516 +0000 UTC m=+219.220931307" observedRunningTime="2026-01-09 10:49:34.408376084 +0000 UTC m=+219.858280875" watchObservedRunningTime="2026-01-09 10:49:34.409209269 +0000 UTC m=+219.859114050" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.432062 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" podStartSLOduration=16.432039358 podStartE2EDuration="16.432039358s" podCreationTimestamp="2026-01-09 10:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:34.428244553 +0000 UTC m=+219.878149334" watchObservedRunningTime="2026-01-09 10:49:34.432039358 +0000 UTC m=+219.881944149" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.472325 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" podStartSLOduration=35.472294972 podStartE2EDuration="35.472294972s" podCreationTimestamp="2026-01-09 10:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:34.467990552 +0000 UTC m=+219.917895353" watchObservedRunningTime="2026-01-09 10:49:34.472294972 +0000 UTC m=+219.922199753" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.823902 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.881483 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:34 crc kubenswrapper[4727]: E0109 10:49:34.882018 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.882080 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.882240 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.883151 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.884037 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.953696 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.954247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.954409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.954628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.955001 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config" (OuterVolumeSpecName: "config") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: 
"064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.955157 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.955669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca" (OuterVolumeSpecName: "client-ca") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: "064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.962158 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6" (OuterVolumeSpecName: "kube-api-access-lr2t6") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: "064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "kube-api-access-lr2t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.964444 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: "064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056086 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056297 4727 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056312 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056323 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.157915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.158037 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.158096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 
09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.158130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.159696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.159805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.166453 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.179499 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.209893 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335320 4727 generic.go:334] "Generic (PLEG): container finished" podID="064897bd-61aa-4547-8de9-14abed17dad2" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" exitCode=0
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335436 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335428 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerDied","Data":"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"}
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335908 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerDied","Data":"8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9"}
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335937 4727 scope.go:117] "RemoveContainer" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.376501 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"]
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.384080 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"]
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.400968 4727 scope.go:117] "RemoveContainer" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"
Jan 09 10:49:35 crc kubenswrapper[4727]: E0109 10:49:35.401495 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4\": container with ID starting with 90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4 not found: ID does not exist" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.401568 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"} err="failed to get container status \"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4\": rpc error: code = NotFound desc = could not find container \"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4\": container with ID starting with 90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4 not found: ID does not exist"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.669767 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.725778 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"]
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.768839 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") "
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.769369 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") "
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.770660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" (UID: "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.777120 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" (UID: "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.871867 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.871923 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.346236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9","Type":"ContainerDied","Data":"cf4cf680cbea13629ffb9b6b950ebd32261c06a8a59c690f1c39f7cb05418444"}
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.346299 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4cf680cbea13629ffb9b6b950ebd32261c06a8a59c690f1c39f7cb05418444"
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.346440 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.355157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerStarted","Data":"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"}
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.355229 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerStarted","Data":"986f3af8c1633fddbf062a038327d6da7e29234701c7af0c999ffe1885a1ca72"}
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.357840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.434018 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.483161 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" podStartSLOduration=17.483135633 podStartE2EDuration="17.483135633s" podCreationTimestamp="2026-01-09 10:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:36.385851868 +0000 UTC m=+221.835756669" watchObservedRunningTime="2026-01-09 10:49:36.483135633 +0000 UTC m=+221.933040414"
Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.868289 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064897bd-61aa-4547-8de9-14abed17dad2" path="/var/lib/kubelet/pods/064897bd-61aa-4547-8de9-14abed17dad2/volumes"
Jan 09 10:49:38 crc kubenswrapper[4727]: I0109 10:49:38.369139 4727 generic.go:334] "Generic (PLEG): container finished" podID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72" exitCode=0
Jan 09 10:49:38 crc kubenswrapper[4727]: I0109 10:49:38.369245 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"}
Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.405030 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.405452 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.405533 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.406273 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.406330 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827" gracePeriod=600
Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.382801 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827" exitCode=0
Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.382889 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827"}
Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.785455 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qzjvr"
Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.785552 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qzjvr"
Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.875351 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qzjvr"
Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.392315 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerStarted","Data":"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"}
Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.394962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90"}
Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.422230 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2hxb" podStartSLOduration=4.647978733 podStartE2EDuration="1m1.42220498s" podCreationTimestamp="2026-01-09 10:48:40 +0000 UTC" firstStartedPulling="2026-01-09 10:48:43.258803456 +0000 UTC m=+168.708708237" lastFinishedPulling="2026-01-09 10:49:40.033029713 +0000 UTC m=+225.482934484" observedRunningTime="2026-01-09 10:49:41.420217481 +0000 UTC m=+226.870122272" watchObservedRunningTime="2026-01-09 10:49:41.42220498 +0000 UTC m=+226.872109761"
Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.468546 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qzjvr"
Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.516467 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"]
Jan 09 10:49:42 crc kubenswrapper[4727]: I0109 10:49:42.925670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:49:42 crc kubenswrapper[4727]: I0109 10:49:42.926153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:49:42 crc kubenswrapper[4727]: I0109 10:49:42.983755 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.401814 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.401887 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.445542 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.458346 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.524323 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:49:45 crc kubenswrapper[4727]: I0109 10:49:45.427381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerStarted","Data":"020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b"}
Jan 09 10:49:45 crc kubenswrapper[4727]: I0109 10:49:45.491903 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"]
Jan 09 10:49:45 crc kubenswrapper[4727]: I0109 10:49:45.492192 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pgnj5" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server" containerID="cri-o://64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be" gracePeriod=2
Jan 09 10:49:46 crc kubenswrapper[4727]: I0109 10:49:46.441918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b"}
Jan 09 10:49:46 crc kubenswrapper[4727]: I0109 10:49:46.441926 4727 generic.go:334] "Generic (PLEG): container finished" podID="847f9d70-de5c-4bc0-9823-c4074e353565" containerID="020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b" exitCode=0
Jan 09 10:49:47 crc kubenswrapper[4727]: I0109 10:49:47.454330 4727 generic.go:334] "Generic (PLEG): container finished" podID="52829665-e7e7-4322-a38e-731d67de0a1e" containerID="64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be" exitCode=0
Jan 09 10:49:47 crc kubenswrapper[4727]: I0109 10:49:47.454375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be"}
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.249628 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.282291 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"52829665-e7e7-4322-a38e-731d67de0a1e\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") "
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.282424 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"52829665-e7e7-4322-a38e-731d67de0a1e\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") "
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.282539 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"52829665-e7e7-4322-a38e-731d67de0a1e\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") "
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.283301 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities" (OuterVolumeSpecName: "utilities") pod "52829665-e7e7-4322-a38e-731d67de0a1e" (UID: "52829665-e7e7-4322-a38e-731d67de0a1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.294745 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc" (OuterVolumeSpecName: "kube-api-access-k5hbc") pod "52829665-e7e7-4322-a38e-731d67de0a1e" (UID: "52829665-e7e7-4322-a38e-731d67de0a1e"). InnerVolumeSpecName "kube-api-access-k5hbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.321272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52829665-e7e7-4322-a38e-731d67de0a1e" (UID: "52829665-e7e7-4322-a38e-731d67de0a1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.391194 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.391247 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.391263 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.462355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"301dab3d04bf736cfc1cfc161435219d3d49e05da644c5b2c0bdb5bb934e1806"}
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.462430 4727 scope.go:117] "RemoveContainer" containerID="64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be"
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.462461 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.494327 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"]
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.497534 4727 scope.go:117] "RemoveContainer" containerID="365d92d81408d60ec382bc6ab0b4a9e0d23f934158b015c99820128061ced4a5"
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.498650 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"]
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.530595 4727 scope.go:117] "RemoveContainer" containerID="22ac19595fc4f0a184b8660c25bad2c44186a8659978bbc2fc9d9b604da4ef99"
Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.871396 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" path="/var/lib/kubelet/pods/52829665-e7e7-4322-a38e-731d67de0a1e/volumes"
Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.499841 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerStarted","Data":"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"}
Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.503472 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerStarted","Data":"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc"}
Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.507502 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerStarted","Data":"0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607"}
Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.509979 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerStarted","Data":"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2"}
Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.584557 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlqjk" podStartSLOduration=4.301560359 podStartE2EDuration="1m9.58452158s" podCreationTimestamp="2026-01-09 10:48:41 +0000 UTC" firstStartedPulling="2026-01-09 10:48:44.538930722 +0000 UTC m=+169.988835493" lastFinishedPulling="2026-01-09 10:49:49.821891933 +0000 UTC m=+235.271796714" observedRunningTime="2026-01-09 10:49:50.580107287 +0000 UTC m=+236.030012088" watchObservedRunningTime="2026-01-09 10:49:50.58452158 +0000 UTC m=+236.034426371"
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.080030 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.080109 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.121954 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.518744 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7741215-a775-4b93-9062-45e620560d49" containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" exitCode=0
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.518846 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2"}
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.522881 4727 generic.go:334] "Generic (PLEG): container finished" podID="db9e6995-13ec-46a4-a659-0acc617449d3" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b" exitCode=0
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.522972 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"}
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.532758 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" exitCode=0
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.532878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc"}
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.546112 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.546188 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.584063 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.543813 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerStarted","Data":"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"}
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.565864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerStarted","Data":"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1"}
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.570854 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerStarted","Data":"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d"}
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.594317 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qdwnw" podStartSLOduration=3.241966548 podStartE2EDuration="1m8.594286518s" podCreationTimestamp="2026-01-09 10:48:44 +0000 UTC" firstStartedPulling="2026-01-09 10:48:46.639869395 +0000 UTC m=+172.089774176" lastFinishedPulling="2026-01-09 10:49:51.992189365 +0000 UTC m=+237.442094146" observedRunningTime="2026-01-09 10:49:52.590222305 +0000 UTC m=+238.040127086" watchObservedRunningTime="2026-01-09 10:49:52.594286518 +0000 UTC m=+238.044191299"
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.597725 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" probeResult="failure" output=<
Jan 09 10:49:52 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Jan 09 10:49:52 crc kubenswrapper[4727]: >
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.644098 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dpfxv" podStartSLOduration=4.187093464 podStartE2EDuration="1m9.64407039s" podCreationTimestamp="2026-01-09 10:48:43 +0000 UTC" firstStartedPulling="2026-01-09 10:48:46.649319431 +0000 UTC m=+172.099224212" lastFinishedPulling="2026-01-09 10:49:52.106296357 +0000 UTC m=+237.556201138" observedRunningTime="2026-01-09 10:49:52.617324704 +0000 UTC m=+238.067229495" watchObservedRunningTime="2026-01-09 10:49:52.64407039 +0000 UTC m=+238.093975171"
Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.644723 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lj7dw" podStartSLOduration=5.311533839 podStartE2EDuration="1m12.644717069s" podCreationTimestamp="2026-01-09 10:48:40 +0000 UTC" firstStartedPulling="2026-01-09 10:48:44.631731232 +0000 UTC m=+170.081636013" lastFinishedPulling="2026-01-09 10:49:51.964914462 +0000 UTC m=+237.414819243" observedRunningTime="2026-01-09 10:49:52.640853153 +0000 UTC m=+238.090757934" watchObservedRunningTime="2026-01-09 10:49:52.644717069 +0000 UTC m=+238.094621850"
Jan 09 10:49:53 crc kubenswrapper[4727]: I0109 10:49:53.896476 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"]
Jan 09 10:49:53 crc kubenswrapper[4727]: I0109 10:49:53.897274 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2hxb" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server" containerID="cri-o://3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28" gracePeriod=2
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.438798 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.438891 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.491159 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.589422 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"ee7a242f-7b69-4d13-bc60-f9c519d29024\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") "
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.589763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"ee7a242f-7b69-4d13-bc60-f9c519d29024\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") "
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.589797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"ee7a242f-7b69-4d13-bc60-f9c519d29024\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") "
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.591731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities" (OuterVolumeSpecName: "utilities") pod "ee7a242f-7b69-4d13-bc60-f9c519d29024" (UID: "ee7a242f-7b69-4d13-bc60-f9c519d29024"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.596867 4727 generic.go:334] "Generic (PLEG): container finished" podID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28" exitCode=0
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"}
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"fdad070e71d4bbce550062d735b7d4a59eda1ba60bd27a561289a761c73ac4de"}
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597372 4727 scope.go:117] "RemoveContainer" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597547 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.599387 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml" (OuterVolumeSpecName: "kube-api-access-shvml") pod "ee7a242f-7b69-4d13-bc60-f9c519d29024" (UID: "ee7a242f-7b69-4d13-bc60-f9c519d29024"). InnerVolumeSpecName "kube-api-access-shvml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.631660 4727 scope.go:117] "RemoveContainer" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.649085 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee7a242f-7b69-4d13-bc60-f9c519d29024" (UID: "ee7a242f-7b69-4d13-bc60-f9c519d29024"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.655308 4727 scope.go:117] "RemoveContainer" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.682956 4727 scope.go:117] "RemoveContainer" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"
Jan 09 10:49:54 crc kubenswrapper[4727]: E0109 10:49:54.683713 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28\": container with ID starting with 3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28 not found: ID does not exist" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.683814 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"} err="failed to get container status \"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28\": rpc error: code = NotFound desc = could not find container \"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28\": container with ID starting with 3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28 not found: ID does not exist"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.683888 4727 scope.go:117] "RemoveContainer" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"
Jan 09 10:49:54 crc kubenswrapper[4727]: E0109 10:49:54.684367 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72\": container with ID starting with 7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72 not found: ID does not exist" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.684424 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"} err="failed to get container status \"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72\": rpc error: code = NotFound desc = could not find container \"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72\": container with ID starting with 7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72 not found: ID does not exist"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.684445 4727 scope.go:117] "RemoveContainer" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"
Jan 09 10:49:54 crc kubenswrapper[4727]: E0109 10:49:54.685718 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e\": container with ID starting with d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e not found: ID does not exist" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"
Jan 09 10:49:54 
crc kubenswrapper[4727]: I0109 10:49:54.685754 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"} err="failed to get container status \"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e\": rpc error: code = NotFound desc = could not find container \"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e\": container with ID starting with d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e not found: ID does not exist" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.690647 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.690681 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.690696 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.721596 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.721665 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.939898 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.945227 
4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:49:55 crc kubenswrapper[4727]: I0109 10:49:55.485617 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" probeResult="failure" output=< Jan 09 10:49:55 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:49:55 crc kubenswrapper[4727]: > Jan 09 10:49:55 crc kubenswrapper[4727]: I0109 10:49:55.763880 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server" probeResult="failure" output=< Jan 09 10:49:55 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:49:55 crc kubenswrapper[4727]: > Jan 09 10:49:56 crc kubenswrapper[4727]: I0109 10:49:56.867424 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" path="/var/lib/kubelet/pods/ee7a242f-7b69-4d13-bc60-f9c519d29024/volumes" Jan 09 10:49:58 crc kubenswrapper[4727]: I0109 10:49:58.908734 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:58 crc kubenswrapper[4727]: I0109 10:49:58.909562 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" containerID="cri-o://b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" gracePeriod=30 Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.015055 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:59 crc 
kubenswrapper[4727]: I0109 10:49:59.015406 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager" containerID="cri-o://acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" gracePeriod=30 Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.522478 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.529048 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582748 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582824 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582912 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582957 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582995 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583048 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " 
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca" (OuterVolumeSpecName: "client-ca") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config" (OuterVolumeSpecName: "config") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584738 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca" (OuterVolumeSpecName: "client-ca") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584944 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config" (OuterVolumeSpecName: "config") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.589363 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j" (OuterVolumeSpecName: "kube-api-access-w8g5j") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "kube-api-access-w8g5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.590139 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.590731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.591633 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl" (OuterVolumeSpecName: "kube-api-access-lrdbl") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "kube-api-access-lrdbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.630955 4727 generic.go:334] "Generic (PLEG): container finished" podID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" exitCode=0 Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631054 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631047 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerDied","Data":"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"} Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerDied","Data":"eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8"} Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631190 4727 scope.go:117] "RemoveContainer" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632681 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="dff45936-afc6-4df6-9cdd-f813330be05a" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" exitCode=0 Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerDied","Data":"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"} Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632761 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerDied","Data":"986f3af8c1633fddbf062a038327d6da7e29234701c7af0c999ffe1885a1ca72"} Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632804 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.659043 4727 scope.go:117] "RemoveContainer" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" Jan 09 10:49:59 crc kubenswrapper[4727]: E0109 10:49:59.659466 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a\": container with ID starting with b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a not found: ID does not exist" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.659516 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"} err="failed to get container status 
\"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a\": rpc error: code = NotFound desc = could not find container \"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a\": container with ID starting with b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a not found: ID does not exist" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.659547 4727 scope.go:117] "RemoveContainer" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.673080 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.675808 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.682291 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686371 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686412 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686425 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686475 4727 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686498 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686527 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686538 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686618 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686627 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.688647 4727 scope.go:117] "RemoveContainer" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" Jan 09 10:49:59 crc kubenswrapper[4727]: E0109 10:49:59.689323 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca\": container with ID starting with acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca 
not found: ID does not exist" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.689387 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"} err="failed to get container status \"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca\": rpc error: code = NotFound desc = could not find container \"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca\": container with ID starting with acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca not found: ID does not exist" Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.690213 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.237550 4727 patch_prober.go:28] interesting pod/controller-manager-5686478bb9-z9rcn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.237685 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638181 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 
10:50:00.638559 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-content" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638579 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-content" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638592 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638600 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638615 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerName="pruner" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638623 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerName="pruner" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638637 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-utilities" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638645 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-utilities" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638659 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-utilities" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638667 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-utilities" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638676 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-content" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638684 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-content" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638699 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638706 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638716 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638723 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server" Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638734 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638742 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638879 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerName="pruner" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638896 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 
10:50:00.638907 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638917 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638931 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.639640 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642618 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642769 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642786 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642948 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.644292 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.644564 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.646308 4727 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.647412 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651033 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651713 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651786 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.652702 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.652935 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.656013 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.657554 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.659152 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700360 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700546 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700571 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: 
\"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802521 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802755 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: 
I0109 10:50:00.804241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.804287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.804339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.804536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.805498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " 
pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.812441 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.812593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.821935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.823172 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.868296 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" 
path="/var/lib/kubelet/pods/151a8455-f0b6-44d2-a258-0d7a23683e88/volumes" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.869759 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" path="/var/lib/kubelet/pods/dff45936-afc6-4df6-9cdd-f813330be05a/volumes" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.962087 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.971133 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.175803 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.241457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:50:01 crc kubenswrapper[4727]: W0109 10:50:01.251557 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a81417_459b_4cd9_9be8_d04ac04682e3.slice/crio-66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72 WatchSource:0}: Error finding container 66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72: Status 404 returned error can't find the container with id 66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72 Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.292071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.292154 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.357324 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.589416 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.631691 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.656371 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerStarted","Data":"66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72"} Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.658309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerStarted","Data":"bb2f20dd9c688c9d9ca339c2135912218e93eba35cb1a8cb66863cd0423ab406"} Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.696609 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.664260 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerStarted","Data":"6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9"} Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.667140 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.669785 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerStarted","Data":"ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0"} Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.669959 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.683167 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.689314 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" podStartSLOduration=3.689281444 podStartE2EDuration="3.689281444s" podCreationTimestamp="2026-01-09 10:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:02.685308594 +0000 UTC m=+248.135213365" watchObservedRunningTime="2026-01-09 10:50:02.689281444 +0000 UTC m=+248.139186225" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.710829 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" podStartSLOduration=4.710801473 podStartE2EDuration="4.710801473s" podCreationTimestamp="2026-01-09 10:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:02.707807113 +0000 UTC m=+248.157711894" 
watchObservedRunningTime="2026-01-09 10:50:02.710801473 +0000 UTC m=+248.160706254" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.780430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:03 crc kubenswrapper[4727]: I0109 10:50:03.890659 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:50:03 crc kubenswrapper[4727]: I0109 10:50:03.891303 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" containerID="cri-o://0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607" gracePeriod=2 Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.479472 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.523257 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.762104 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.810649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:50:06 crc kubenswrapper[4727]: I0109 10:50:06.294791 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:50:06 crc kubenswrapper[4727]: I0109 10:50:06.545600 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" 
podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" containerID="cri-o://3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" gracePeriod=15 Jan 09 10:50:06 crc kubenswrapper[4727]: I0109 10:50:06.692412 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server" containerID="cri-o://a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" gracePeriod=2 Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.532092 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619885 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620069 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620279 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620358 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620401 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620478 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620569 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.622880 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623125 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623488 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623525 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623572 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.629788 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9" (OuterVolumeSpecName: "kube-api-access-phph9") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "kube-api-access-phph9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.630183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.630888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631351 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631407 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631384 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631806 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.632984 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.633086 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.644567 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-tsct5"] Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.644873 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645659 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.645740 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-utilities" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645755 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-utilities" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.645770 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645784 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.645869 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-content" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645879 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-content" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.646430 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" 
containerName="registry-server" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.646458 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.649488 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.667316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-tsct5"] Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.700265 4727 generic.go:334] "Generic (PLEG): container finished" podID="847f9d70-de5c-4bc0-9823-c4074e353565" containerID="0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607" exitCode=0 Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.700367 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607"} Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"} Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704468 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704501 4727 scope.go:117] "RemoveContainer" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704362 4727 generic.go:334] "Generic (PLEG): container finished" podID="db9e6995-13ec-46a4-a659-0acc617449d3" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" exitCode=0 Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e"} Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710685 4727 generic.go:334] "Generic (PLEG): container finished" podID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" exitCode=0 Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerDied","Data":"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"} Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710748 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerDied","Data":"887701e00f73eb4322aa6d1e2bd519ba9d9e95d1edd0663c388315ca72c944aa"} Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710824 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722303 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"db9e6995-13ec-46a4-a659-0acc617449d3\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722348 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"db9e6995-13ec-46a4-a659-0acc617449d3\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"db9e6995-13ec-46a4-a659-0acc617449d3\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722569 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722593 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: 
\"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722621 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722638 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-policies\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722695 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpznp\" (UniqueName: \"kubernetes.io/projected/1140a4e4-44b9-4d5f-8232-cea144e8e050-kube-api-access-jpznp\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723586 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-dir\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723798 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities" (OuterVolumeSpecName: "utilities") pod "db9e6995-13ec-46a4-a659-0acc617449d3" (UID: "db9e6995-13ec-46a4-a659-0acc617449d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723944 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724028 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: 
\"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724184 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724221 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724246 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724259 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724283 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724296 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724552 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724569 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724581 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724595 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724609 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724622 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724635 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 
crc kubenswrapper[4727]: I0109 10:50:07.724647 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724658 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.726459 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4" (OuterVolumeSpecName: "kube-api-access-lmdl4") pod "db9e6995-13ec-46a4-a659-0acc617449d3" (UID: "db9e6995-13ec-46a4-a659-0acc617449d3"). InnerVolumeSpecName "kube-api-access-lmdl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.781367 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.781641 4727 scope.go:117] "RemoveContainer" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.787162 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.803761 4727 scope.go:117] "RemoveContainer" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.820895 4727 scope.go:117] "RemoveContainer" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.821642 4727 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43\": container with ID starting with a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43 not found: ID does not exist" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.821698 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"} err="failed to get container status \"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43\": rpc error: code = NotFound desc = could not find container \"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43\": container with ID starting with a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43 not found: ID does not exist" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.821742 4727 scope.go:117] "RemoveContainer" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.822080 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b\": container with ID starting with 2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b not found: ID does not exist" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822109 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"} err="failed to get container status \"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b\": rpc error: code = NotFound desc = could not find 
container \"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b\": container with ID starting with 2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b not found: ID does not exist" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822129 4727 scope.go:117] "RemoveContainer" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.822371 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24\": container with ID starting with 4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24 not found: ID does not exist" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822398 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"} err="failed to get container status \"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24\": rpc error: code = NotFound desc = could not find container \"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24\": container with ID starting with 4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24 not found: ID does not exist" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822415 4727 scope.go:117] "RemoveContainer" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") 
" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpznp\" (UniqueName: \"kubernetes.io/projected/1140a4e4-44b9-4d5f-8232-cea144e8e050-kube-api-access-jpznp\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-dir\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826250 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826364 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826403 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: 
\"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826426 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-policies\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826485 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826567 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmdl4\" 
(UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-dir\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.828132 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.829207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.830385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-policies\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.830957 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.831845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832159 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832553 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " 
pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.834077 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.834130 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.837040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.850571 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpznp\" (UniqueName: 
\"kubernetes.io/projected/1140a4e4-44b9-4d5f-8232-cea144e8e050-kube-api-access-jpznp\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.859179 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db9e6995-13ec-46a4-a659-0acc617449d3" (UID: "db9e6995-13ec-46a4-a659-0acc617449d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.868382 4727 scope.go:117] "RemoveContainer" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.869991 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab\": container with ID starting with 3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab not found: ID does not exist" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.870068 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"} err="failed to get container status \"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab\": rpc error: code = NotFound desc = could not find container \"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab\": container with ID starting with 3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab not found: ID does not exist" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.928966 4727 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.040664 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.044193 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.083375 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.112837 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.234178 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"847f9d70-de5c-4bc0-9823-c4074e353565\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.234305 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"847f9d70-de5c-4bc0-9823-c4074e353565\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.234457 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"847f9d70-de5c-4bc0-9823-c4074e353565\" (UID: 
\"847f9d70-de5c-4bc0-9823-c4074e353565\") " Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.235784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities" (OuterVolumeSpecName: "utilities") pod "847f9d70-de5c-4bc0-9823-c4074e353565" (UID: "847f9d70-de5c-4bc0-9823-c4074e353565"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.239536 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt" (OuterVolumeSpecName: "kube-api-access-cjjlt") pod "847f9d70-de5c-4bc0-9823-c4074e353565" (UID: "847f9d70-de5c-4bc0-9823-c4074e353565"). InnerVolumeSpecName "kube-api-access-cjjlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.288460 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847f9d70-de5c-4bc0-9823-c4074e353565" (UID: "847f9d70-de5c-4bc0-9823-c4074e353565"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.336472 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.336548 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.336561 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.510700 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-tsct5"] Jan 09 10:50:08 crc kubenswrapper[4727]: W0109 10:50:08.517247 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1140a4e4_44b9_4d5f_8232_cea144e8e050.slice/crio-29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429 WatchSource:0}: Error finding container 29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429: Status 404 returned error can't find the container with id 29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429 Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.722883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" event={"ID":"1140a4e4-44b9-4d5f-8232-cea144e8e050","Type":"ContainerStarted","Data":"29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429"} Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.726782 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"4e7da0de585649169fd8cf1b1066a4fe59cfd2aac18387a51307fee26f57796c"} Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.726814 4727 scope.go:117] "RemoveContainer" containerID="0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.726903 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.763886 4727 scope.go:117] "RemoveContainer" containerID="020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.768719 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.771543 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.788196 4727 scope.go:117] "RemoveContainer" containerID="d91d351a8c554abc2fdcaa83ba21ac1cd2528cb470f7cc7b072bc6c71cf7875d" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.869592 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" path="/var/lib/kubelet/pods/01aaae54-a546-4083-88ea-d3adc6a3ea7e/volumes" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.870398 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" path="/var/lib/kubelet/pods/847f9d70-de5c-4bc0-9823-c4074e353565/volumes" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.871178 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="db9e6995-13ec-46a4-a659-0acc617449d3" path="/var/lib/kubelet/pods/db9e6995-13ec-46a4-a659-0acc617449d3/volumes" Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.746281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" event={"ID":"1140a4e4-44b9-4d5f-8232-cea144e8e050","Type":"ContainerStarted","Data":"0d133ba109a82b9e848e9de28714152e51f5a7591f70efee78241e61a55d3f3d"} Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.746838 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.752220 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.769689 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" podStartSLOduration=28.769668408 podStartE2EDuration="28.769668408s" podCreationTimestamp="2026-01-09 10:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:09.769541654 +0000 UTC m=+255.219446465" watchObservedRunningTime="2026-01-09 10:50:09.769668408 +0000 UTC m=+255.219573189" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.166445 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.167127 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="extract-utilities" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167145 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" 
containerName="extract-utilities" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.167167 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="extract-content" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167177 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="extract-content" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.167188 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167194 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167331 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168390 4727 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168669 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168710 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168877 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168905 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.169001 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.169099 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168969 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.169974 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170060 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170078 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170087 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170097 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170105 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170116 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170121 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170140 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170145 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170153 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170160 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170174 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170180 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170290 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170304 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170313 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170322 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170331 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170567 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.175450 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303843 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303885 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304204 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406013 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406152 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406211 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406192 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406416 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406463 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.765695 4727 generic.go:334] "Generic (PLEG): container finished" podID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerID="6db409e85d88995423280632c4625000e42915184376c39b6a7a5ad209ecd5b5" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.765821 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerDied","Data":"6db409e85d88995423280632c4625000e42915184376c39b6a7a5ad209ecd5b5"} Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.767077 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.770239 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.772494 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773646 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773683 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773694 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773714 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" exitCode=2 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773755 4727 scope.go:117] "RemoveContainer" containerID="23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.354424 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:27da5043f12d5307a70c72f97a3fa66058dee448a5dec7cd83b0aa63f5496935\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f05e1dfe1f6582ffaf0843b908ef08d6fd1a032539e2d8ce20fd84ee0c4ec783\\\",\\\"registry.redhat.io/red
hat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1665092989},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:9c98ee6f2d9b7993896c073e43217f838b4429acd29804b046840e375a35a8ec\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc6b4e2a8395d8afad2aa9b9632ecb98ce8dde7c73980fcf5b37cb5648d6b87f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203840338},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8dc6bf40bb85b3c070ac6ce1243b4d687fd575150299376d036af7b541798910\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e9bfaeae78e144645263e94c4eec4e342eeddbe95edd9b8e0ef6c87b7a507ba6\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201485666},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642
465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f0286
4b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"s
izeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.355767 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.356362 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 
09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.357048 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.357263 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.357288 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:50:12 crc kubenswrapper[4727]: I0109 10:50:12.782009 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.163333 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.164446 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.234710 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"8f187469-eca7-43d1-80a1-5b67f7aff838\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.234779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"8f187469-eca7-43d1-80a1-5b67f7aff838\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f187469-eca7-43d1-80a1-5b67f7aff838" (UID: "8f187469-eca7-43d1-80a1-5b67f7aff838"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235357 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"8f187469-eca7-43d1-80a1-5b67f7aff838\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235482 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock" (OuterVolumeSpecName: "var-lock") pod "8f187469-eca7-43d1-80a1-5b67f7aff838" (UID: "8f187469-eca7-43d1-80a1-5b67f7aff838"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235871 4727 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235889 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.266784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f187469-eca7-43d1-80a1-5b67f7aff838" (UID: "8f187469-eca7-43d1-80a1-5b67f7aff838"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.337395 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.798522 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerDied","Data":"ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4"} Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.798588 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.798649 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.861425 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.865064 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.865865 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.866392 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.866645 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.944849 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.944923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945028 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945054 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945647 4727 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945671 4727 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945682 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.808151 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.809393 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" exitCode=0 Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.809470 4727 scope.go:117] "RemoveContainer" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.809534 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.827919 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.828160 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.838842 4727 scope.go:117] "RemoveContainer" containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.855212 4727 scope.go:117] "RemoveContainer" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.865730 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.866195 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc 
kubenswrapper[4727]: I0109 10:50:14.867719 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.873560 4727 scope.go:117] "RemoveContainer" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.888684 4727 scope.go:117] "RemoveContainer" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.907041 4727 scope.go:117] "RemoveContainer" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.926372 4727 scope.go:117] "RemoveContainer" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.927044 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\": container with ID starting with b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d not found: ID does not exist" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927104 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d"} err="failed to get container status \"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\": rpc error: code = NotFound desc = could not find container \"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\": container with ID starting with b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d not found: ID does not exist" Jan 
09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927143 4727 scope.go:117] "RemoveContainer" containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.927499 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\": container with ID starting with b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c not found: ID does not exist" containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927548 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c"} err="failed to get container status \"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\": rpc error: code = NotFound desc = could not find container \"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\": container with ID starting with b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927571 4727 scope.go:117] "RemoveContainer" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.929117 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\": container with ID starting with e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3 not found: ID does not exist" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929156 4727 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3"} err="failed to get container status \"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\": rpc error: code = NotFound desc = could not find container \"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\": container with ID starting with e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3 not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929189 4727 scope.go:117] "RemoveContainer" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.929520 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\": container with ID starting with 6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7 not found: ID does not exist" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929544 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7"} err="failed to get container status \"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\": rpc error: code = NotFound desc = could not find container \"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\": container with ID starting with 6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7 not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929557 4727 scope.go:117] "RemoveContainer" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.930003 4727 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\": container with ID starting with f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664 not found: ID does not exist" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.930081 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664"} err="failed to get container status \"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\": rpc error: code = NotFound desc = could not find container \"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\": container with ID starting with f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664 not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.930140 4727 scope.go:117] "RemoveContainer" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.930815 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\": container with ID starting with 3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03 not found: ID does not exist" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.930845 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03"} err="failed to get container status \"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\": rpc error: code = NotFound desc = could not find container 
\"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\": container with ID starting with 3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03 not found: ID does not exist" Jan 09 10:50:16 crc kubenswrapper[4727]: E0109 10:50:16.200202 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 10:50:16.200683 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:16 crc kubenswrapper[4727]: E0109 10:50:16.227086 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18890a72a4bdcc92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 10:50:16.226081938 +0000 UTC m=+261.675986739,LastTimestamp:2026-01-09 10:50:16.226081938 +0000 UTC m=+261.675986739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 
10:50:16.827247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383"} Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 10:50:16.827915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2a5c0d679da554c0b7a98b17eb420d17afe78530fd47a2e25f888e3a9b7ac285"} Jan 09 10:50:16 crc kubenswrapper[4727]: E0109 10:50:16.828599 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 10:50:16.828706 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.613263 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.614201 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.614667 4727 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.615044 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.615417 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: I0109 10:50:20.615448 4727 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.615708 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.817335 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Jan 09 10:50:21 crc kubenswrapper[4727]: E0109 10:50:21.219125 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" 
interval="800ms" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.859615 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.860703 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.877756 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.877804 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:21 crc kubenswrapper[4727]: E0109 10:50:21.878366 4727 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.878963 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.019932 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.395179 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:27da5043f12d5307a70c72f97a3fa66058dee448a5dec7cd83b0aa63f5496935\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f05e1dfe1f6582ffaf0843b908ef08d6fd1a032539e2d8ce20fd84ee0c4ec783\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1665092989},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:9c98ee6f2d9b7993896c073e43217f838b4429acd29804b046840e375a35a8ec\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc6b4e2a8395d8afad2aa9b9632ecb98ce8dde7c73980fcf5b37cb5648d6b87f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203840338},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8dc6bf40bb85b3c070ac6ce1243b4d687fd575150299376d036af7b541798910\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e9bfaeae78e144645263e94c4eec4e342eeddbe95edd9b8e0ef6c87b7a507ba6\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201485666},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92
edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.396245 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.396721 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.396983 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.397971 4727 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.398003 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.864013 4727 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c" exitCode=0 Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c"} Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df82453108310ee5491d99c3ab8519fa8f143bc7fad3eba550937861443d094a"} Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871694 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871723 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.872308 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.872308 4727 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:23 crc kubenswrapper[4727]: I0109 10:50:23.874910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24f1d805a60138ebcc997636f2d59d6b5125ce0d642d83120ec78da78a118c44"} Jan 09 10:50:23 crc kubenswrapper[4727]: I0109 10:50:23.875409 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31c3d97a883790d3118598609d3d1c80e24721635b74f87a0aaf3c2799b56eec"} Jan 09 10:50:23 crc kubenswrapper[4727]: I0109 10:50:23.875424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8497b91934e11eb2647c7aaedd35d8a58acf169c6beab01156c9c3f25639c5b"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.884037 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.885184 4727 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac" exitCode=1 Jan 09 10:50:24 crc kubenswrapper[4727]: 
I0109 10:50:24.885282 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.886144 4727 scope.go:117] "RemoveContainer" containerID="54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.889831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"238acc9ca1251829789deed01dc932471fa121ee7097117f9c9e519b9afd2a4f"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.889890 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4e764b3186d3997459148ecbecee09819c81b8771a000f00f9e8da5a490c4a31"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.890215 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.890402 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.890593 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:25 crc kubenswrapper[4727]: I0109 10:50:25.900822 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 09 10:50:25 crc kubenswrapper[4727]: I0109 
10:50:25.901297 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b53e2517cb4e6619a42c59d7c74c55875f0794ee7a31605dc0f00bc81c72688e"} Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.422289 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.720115 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.724479 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.879841 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.879903 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.885649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:29 crc kubenswrapper[4727]: I0109 10:50:29.906409 4727 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:29 crc kubenswrapper[4727]: I0109 10:50:29.908163 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods 
\"kube-apiserver-crc\" not found"
Jan 09 10:50:29 crc kubenswrapper[4727]: I0109 10:50:29.948367 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8209f59-f410-436e-867f-fed2cbaa44c1"
Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.930894 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76"
Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.930942 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76"
Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.937058 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8209f59-f410-436e-867f-fed2cbaa44c1"
Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.944047 4727 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://c8497b91934e11eb2647c7aaedd35d8a58acf169c6beab01156c9c3f25639c5b"
Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.944095 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 10:50:31 crc kubenswrapper[4727]: I0109 10:50:31.935323 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76"
Jan 09 10:50:31 crc kubenswrapper[4727]: I0109 10:50:31.935370 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76"
Jan 09 10:50:31 crc kubenswrapper[4727]: I0109 10:50:31.940435 4727
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8209f59-f410-436e-867f-fed2cbaa44c1"
Jan 09 10:50:36 crc kubenswrapper[4727]: I0109 10:50:36.062028 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 09 10:50:36 crc kubenswrapper[4727]: I0109 10:50:36.426710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.076355 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.495879 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.543888 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.552751 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.841341 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.950161 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 09 10:50:38 crc kubenswrapper[4727]: I0109 10:50:38.120309 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 09 10:50:38 crc kubenswrapper[4727]: I0109 10:50:38.708246 4727
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 09 10:50:39 crc kubenswrapper[4727]: I0109 10:50:39.137444 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 09 10:50:39 crc kubenswrapper[4727]: I0109 10:50:39.211496 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 09 10:50:40 crc kubenswrapper[4727]: I0109 10:50:40.623216 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 09 10:50:40 crc kubenswrapper[4727]: I0109 10:50:40.720168 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 09 10:50:40 crc kubenswrapper[4727]: I0109 10:50:40.872083 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.062036 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.130326 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.145964 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.237703 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.378772 4727 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.418229 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.620445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.882217 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.044534 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.069597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.070386 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.180050 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.238680 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.284468 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.316691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 09
10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.317251 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.515440 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.781207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.958306 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.189171 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.281814 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.369896 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.450266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.774692 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.932922 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.995860 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 09 10:50:44 crc
kubenswrapper[4727]: I0109 10:50:44.107432 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.208062 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.486965 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.577854 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.636142 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.636651 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.649631 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.687256 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.710608 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.737754 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.784221 4727 reflector.go:368]
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.837092 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.852180 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.937335 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.987801 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.022817 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.195945 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.286714 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.557131 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.593302 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.617084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.624183 4727 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.667937 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.700597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.968473 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.978469 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.988722 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.988805 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.993671 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.003993 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.010900 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.010877394 podStartE2EDuration="17.010877394s" podCreationTimestamp="2026-01-09 10:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:46.006662383 +0000 UTC m=+291.456567164"
watchObservedRunningTime="2026-01-09 10:50:46.010877394 +0000 UTC m=+291.460782175"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.040161 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.080323 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.116825 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.248908 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.250646 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.313858 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.350175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.483370 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.483460 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.495411 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109
10:50:46.678235 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.831046 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.043064 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.100808 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.150407 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.184540 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.212321 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.329328 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.393026 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.422893 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.442668 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 09 10:50:47 crc kubenswrapper[4727]:
I0109 10:50:47.555728 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.620838 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.680573 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.829621 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.065648 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.090287 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.211918 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.232101 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.328890 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.406144 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.470095 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 09 10:50:48 crc
kubenswrapper[4727]: I0109 10:50:48.503587 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.690886 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.782848 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.859763 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.859764 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.907799 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.007725 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.012409 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.022827 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.048212 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.100350 4727 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-console"/"oauth-serving-cert"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.183828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.187766 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.199124 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.255181 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.442650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.646431 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.707404 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.713410 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.767044 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.787628 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 09 10:50:49
crc kubenswrapper[4727]: I0109 10:50:49.833045 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.857780 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.877912 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.891218 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.961571 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.993098 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.003274 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.052757 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.069981 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.220303 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.272575 4727 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.353867 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.354878 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.406927 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.408791 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.437540 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.541449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.565613 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.696801 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.987030 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.016351 4727 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.422284 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.430089 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.444394 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.508205 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.530827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.548441 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.646000 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.650913 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.678654 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.694644 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109
10:50:51.784028 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.861661 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.869145 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.875622 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.878636 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.179541 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.202562 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.308084 4727 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.308793 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" gracePeriod=5
Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.320317 4727 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-authentication"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.356384 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.364558 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.490497 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.497307 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.518765 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.531318 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.580354 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.593903 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.614407 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.620198 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.639367 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.713893 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.741415 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.770025 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.845719 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.849277 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.883079 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.992030 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.038672 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.068580 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.203869 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.288646 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.288886 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.312207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.384740 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.386174 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.390247 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.441315 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.442830 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.454345 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.594034 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 09 10:50:53 crc 
kubenswrapper[4727]: I0109 10:50:53.596222 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.640694 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.662675 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.678227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.679809 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.700840 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.828072 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.835534 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.908449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.988138 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.090940 
4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.123413 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.127043 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.215269 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.262967 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.326614 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.528320 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.590331 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.682790 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.683802 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.719373 4727 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 09 
10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.721894 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.723100 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.851086 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.888115 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.965826 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.978194 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.010975 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.051650 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.227032 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.289439 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 09 10:50:55 crc 
kubenswrapper[4727]: I0109 10:50:55.408860 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.483478 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.492084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.501661 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.516278 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.580614 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.680754 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.723904 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.777262 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.777784 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.823186 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 
10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.896876 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.903685 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.131866 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.160301 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.209847 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.429778 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.585453 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.590373 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.629685 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.665583 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.698786 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.198176 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.485645 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.545269 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.656429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.723848 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.744052 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.890008 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.890112 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.031081 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034816 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034916 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034903 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034939 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034944 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035074 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035478 4727 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035537 4727 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035551 4727 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035562 4727 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.043204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.103943 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.104198 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.104363 4727 scope.go:117] "RemoveContainer" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.104030 4727 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" exitCode=137 Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.133665 4727 scope.go:117] "RemoveContainer" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" Jan 09 10:50:58 crc kubenswrapper[4727]: E0109 10:50:58.135877 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383\": container with ID starting with 344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383 not found: ID does not exist" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.135913 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383"} err="failed to get container status \"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383\": rpc error: code = NotFound desc = could not find container \"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383\": container with ID starting with 344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383 not found: ID does not exist" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.136486 4727 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on 
node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.411434 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.567104 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.868970 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.928231 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.928596 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager" containerID="cri-o://6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9" gracePeriod=30 Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.937020 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.937823 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager" containerID="cri-o://ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0" gracePeriod=30 Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.111128 4727 generic.go:334] "Generic (PLEG): container finished" podID="47b307d6-5374-4c43-af7a-57c97019e1a4" 
containerID="ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0" exitCode=0 Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.111704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerDied","Data":"ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0"} Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.113414 4727 generic.go:334] "Generic (PLEG): container finished" podID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerID="6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9" exitCode=0 Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.113476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerDied","Data":"6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9"} Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.376705 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.386642 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.559101 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.560700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.560862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561153 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561391 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561519 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561834 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config" (OuterVolumeSpecName: "config") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562748 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562843 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562931 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.563019 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562878 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config" (OuterVolumeSpecName: "config") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.567094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv" (OuterVolumeSpecName: "kube-api-access-xjswv") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "kube-api-access-xjswv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.570804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.570916 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29" (OuterVolumeSpecName: "kube-api-access-z7m29") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "kube-api-access-z7m29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.571265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664686 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664733 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664748 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664762 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664771 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.122995 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerDied","Data":"66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72"} Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.123038 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.123099 4727 scope.go:117] "RemoveContainer" containerID="6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.125538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerDied","Data":"bb2f20dd9c688c9d9ca339c2135912218e93eba35cb1a8cb66863cd0423ab406"} Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.125629 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.142043 4727 scope.go:117] "RemoveContainer" containerID="ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.156871 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.172287 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.172545 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.176830 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.181432 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 
10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751138 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"] Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751480 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751496 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751528 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerName="installer" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751535 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerName="installer" Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751545 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751552 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager" Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751572 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751577 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751681 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerName="installer" Jan 09 10:51:00 crc 
kubenswrapper[4727]: I0109 10:51:00.751693 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751705 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751714 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.752360 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.754563 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.755660 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.755918 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.756212 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.756817 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.758340 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"] Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.759319 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.762166 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.762298 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.763648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"] Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.769138 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.769847 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.770773 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.770964 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.771049 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.772170 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.783424 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"] Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.866957 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" path="/var/lib/kubelet/pods/47b307d6-5374-4c43-af7a-57c97019e1a4/volumes" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.867967 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" path="/var/lib/kubelet/pods/69a81417-459b-4cd9-9be8-d04ac04682e3/volumes" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878560 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-client-ca\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878632 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-proxy-ca-bundles\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878676 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-serving-cert\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878706 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvh7x\" (UniqueName: \"kubernetes.io/projected/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-kube-api-access-nvh7x\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878744 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-config\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878773 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878803 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878830 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod 
\"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878856 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.980883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-serving-cert\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.980964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvh7x\" (UniqueName: \"kubernetes.io/projected/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-kube-api-access-nvh7x\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-config\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981054 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981121 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981232 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-client-ca\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " 
pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-proxy-ca-bundles\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985111 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-client-ca\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-proxy-ca-bundles\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-config\") pod 
\"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.987608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.990395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.991450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-serving-cert\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.005996 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.006113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nvh7x\" (UniqueName: \"kubernetes.io/projected/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-kube-api-access-nvh7x\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.088705 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.101341 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.561521 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"] Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.566352 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"] Jan 09 10:51:01 crc kubenswrapper[4727]: W0109 10:51:01.577129 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc6552dd_8901_46c7_afba_4a46dd4ee5fd.slice/crio-542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5 WatchSource:0}: Error finding container 542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5: Status 404 returned error can't find the container with id 542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5 Jan 09 10:51:02 crc kubenswrapper[4727]: I0109 10:51:02.142202 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" 
event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerStarted","Data":"dfa5f71f305a4ffb2e2fc7b0bcee503ce3fe986d9840097185d2065a70651d33"} Jan 09 10:51:02 crc kubenswrapper[4727]: I0109 10:51:02.143416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" event={"ID":"bc6552dd-8901-46c7-afba-4a46dd4ee5fd","Type":"ContainerStarted","Data":"542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5"} Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.151171 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" event={"ID":"bc6552dd-8901-46c7-afba-4a46dd4ee5fd","Type":"ContainerStarted","Data":"d79f5347ba2719e69b0754febea43c1bfd1db4372db5dad46cd1d02d888d6133"} Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.153074 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.153109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerStarted","Data":"c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea"} Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.153292 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.158863 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.158997 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.182100 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" podStartSLOduration=5.182075006 podStartE2EDuration="5.182075006s" podCreationTimestamp="2026-01-09 10:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:03.177226115 +0000 UTC m=+308.627130896" watchObservedRunningTime="2026-01-09 10:51:03.182075006 +0000 UTC m=+308.631979777" Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.215981 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" podStartSLOduration=5.215958091 podStartE2EDuration="5.215958091s" podCreationTimestamp="2026-01-09 10:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:03.21592981 +0000 UTC m=+308.665834601" watchObservedRunningTime="2026-01-09 10:51:03.215958091 +0000 UTC m=+308.665862872" Jan 09 10:51:18 crc kubenswrapper[4727]: I0109 10:51:18.916786 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"] Jan 09 10:51:18 crc kubenswrapper[4727]: I0109 10:51:18.919221 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager" containerID="cri-o://c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea" gracePeriod=30 Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.302934 4727 generic.go:334] "Generic (PLEG): 
container finished" podID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerID="c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea" exitCode=0 Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.303357 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerDied","Data":"c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea"} Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.416706 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.590842 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.590901 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.591036 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.591928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.591942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config" (OuterVolumeSpecName: "config") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.592193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca" (OuterVolumeSpecName: "client-ca") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.592697 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.592728 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.602015 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.602704 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8" (OuterVolumeSpecName: "kube-api-access-qvbr8") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "kube-api-access-qvbr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.694642 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.694700 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.316380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerDied","Data":"dfa5f71f305a4ffb2e2fc7b0bcee503ce3fe986d9840097185d2065a70651d33"} Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.316466 4727 scope.go:117] "RemoveContainer" containerID="c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.316526 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.355150 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"] Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.359567 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"] Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.767046 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:20 crc kubenswrapper[4727]: E0109 10:51:20.767393 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.767410 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.767568 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.768079 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.770659 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.777828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.778582 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.778604 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.779240 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.782550 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.782929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.870250 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" path="/var/lib/kubelet/pods/d015075e-a19d-4f8e-b2fd-b303f8c3b230/volumes" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.913898 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod 
\"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.913948 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.913981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.914159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.015057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc 
kubenswrapper[4727]: I0109 10:51:21.015110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.015170 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.015212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.016742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.017051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: 
\"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.018855 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.033420 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.084306 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.510237 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.329299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerStarted","Data":"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13"} Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.329688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerStarted","Data":"ec2e727083cc949091b81c436ace0d73c2ccacd9f8280230033f02c043d2f2e4"} Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.329710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.336113 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.353638 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" podStartSLOduration=4.353618929 podStartE2EDuration="4.353618929s" podCreationTimestamp="2026-01-09 10:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:22.350144945 +0000 UTC m=+327.800049736" 
watchObservedRunningTime="2026-01-09 10:51:22.353618929 +0000 UTC m=+327.803523720" Jan 09 10:51:38 crc kubenswrapper[4727]: I0109 10:51:38.944228 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:38 crc kubenswrapper[4727]: I0109 10:51:38.945198 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" containerID="cri-o://52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" gracePeriod=30 Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.423566 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.427920 4727 generic.go:334] "Generic (PLEG): container finished" podID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" exitCode=0 Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.427976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerDied","Data":"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13"} Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.428011 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerDied","Data":"ec2e727083cc949091b81c436ace0d73c2ccacd9f8280230033f02c043d2f2e4"} Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.428020 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.428033 4727 scope.go:117] "RemoveContainer" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.453365 4727 scope.go:117] "RemoveContainer" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" Jan 09 10:51:39 crc kubenswrapper[4727]: E0109 10:51:39.454204 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13\": container with ID starting with 52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13 not found: ID does not exist" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.454259 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13"} err="failed to get container status \"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13\": rpc error: code = NotFound desc = could not find container \"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13\": container with ID starting with 52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13 not found: ID does not exist" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.502928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.502993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.503084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.503125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.504748 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca" (OuterVolumeSpecName: "client-ca") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.504779 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config" (OuterVolumeSpecName: "config") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.510754 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.512477 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw" (OuterVolumeSpecName: "kube-api-access-k6wkw") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "kube-api-access-k6wkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604874 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604920 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604929 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604942 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") on node \"crc\" DevicePath 
\"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.766069 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.770671 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.784646 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g"] Jan 09 10:51:40 crc kubenswrapper[4727]: E0109 10:51:40.785167 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.785184 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.785313 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.785875 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790451 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790559 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790594 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790615 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790629 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790783 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.798874 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g"] Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.821488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-config\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.821571 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-client-ca\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.821710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75177484-179d-4ff5-9909-6989da323db6-serving-cert\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.822057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6sb\" (UniqueName: \"kubernetes.io/projected/75177484-179d-4ff5-9909-6989da323db6-kube-api-access-mb6sb\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.867580 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" path="/var/lib/kubelet/pods/db952fad-8a21-4564-819a-9c6d0f3d7ae5/volumes" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.923366 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6sb\" (UniqueName: \"kubernetes.io/projected/75177484-179d-4ff5-9909-6989da323db6-kube-api-access-mb6sb\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 
crc kubenswrapper[4727]: I0109 10:51:40.923452 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-config\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.923477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-client-ca\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.923537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75177484-179d-4ff5-9909-6989da323db6-serving-cert\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.926897 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-client-ca\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.927355 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-config\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: 
\"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.929587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75177484-179d-4ff5-9909-6989da323db6-serving-cert\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.941928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6sb\" (UniqueName: \"kubernetes.io/projected/75177484-179d-4ff5-9909-6989da323db6-kube-api-access-mb6sb\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:41 crc kubenswrapper[4727]: I0109 10:51:41.112047 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:41 crc kubenswrapper[4727]: I0109 10:51:41.583404 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g"] Jan 09 10:51:41 crc kubenswrapper[4727]: W0109 10:51:41.587931 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75177484_179d_4ff5_9909_6989da323db6.slice/crio-7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1 WatchSource:0}: Error finding container 7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1: Status 404 returned error can't find the container with id 7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1 Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.448587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" event={"ID":"75177484-179d-4ff5-9909-6989da323db6","Type":"ContainerStarted","Data":"89e4f6b42e10d1b55856c208c316d8bff61a3decc7dc23370c01fcb0854f89b7"} Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.449129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" event={"ID":"75177484-179d-4ff5-9909-6989da323db6","Type":"ContainerStarted","Data":"7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1"} Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.449154 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.458596 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 
10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.475333 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" podStartSLOduration=4.475303251 podStartE2EDuration="4.475303251s" podCreationTimestamp="2026-01-09 10:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:42.470416889 +0000 UTC m=+347.920321670" watchObservedRunningTime="2026-01-09 10:51:42.475303251 +0000 UTC m=+347.925208032" Jan 09 10:52:09 crc kubenswrapper[4727]: I0109 10:52:09.405875 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:52:09 crc kubenswrapper[4727]: I0109 10:52:09.406294 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.540975 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjlsq"] Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.542608 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.561425 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjlsq"] Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.684904 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-certificates\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-tls\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685526 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-bound-sa-token\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-trusted-ca\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685706 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685768 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqtj\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-kube-api-access-hqqtj\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.715190 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-trusted-ca\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794373 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqtj\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-kube-api-access-hqqtj\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794449 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-certificates\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 
10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-tls\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-bound-sa-token\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.796051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-certificates\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.796102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-trusted-ca\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.796331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.802488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.802614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-tls\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.815852 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-bound-sa-token\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.819124 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqtj\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-kube-api-access-hqqtj\") pod \"image-registry-66df7c8f76-tjlsq\" 
(UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.864544 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.070099 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjlsq"] Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.676785 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" event={"ID":"0d039e14-b430-43af-90d4-ebc9ba3bbc3c","Type":"ContainerStarted","Data":"75b91341a178854ccb5cd6309197ea7129ad47e7c925240919b4aff7c0ff816e"} Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.677354 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.677388 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" event={"ID":"0d039e14-b430-43af-90d4-ebc9ba3bbc3c","Type":"ContainerStarted","Data":"b0f14861174f3cf90a27471c4fea3d6a92c164fb3d038a62a63be92f2262c624"} Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.703720 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" podStartSLOduration=1.703691888 podStartE2EDuration="1.703691888s" podCreationTimestamp="2026-01-09 10:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:52:14.701390276 +0000 UTC m=+380.151295057" watchObservedRunningTime="2026-01-09 10:52:14.703691888 +0000 UTC m=+380.153596689" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 
10:52:19.746252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.747683 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qzjvr" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" containerID="cri-o://1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.756477 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.757155 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lj7dw" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" containerID="cri-o://cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.779820 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.780185 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" containerID="cri-o://e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.791714 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.792013 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dtgwm" 
podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" containerID="cri-o://d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.810974 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.811414 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" containerID="cri-o://9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.829073 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55prz"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.829941 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.850379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55prz"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.924365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxgz\" (UniqueName: \"kubernetes.io/projected/82b1f92b-6077-4b4c-876a-3d732a78b2cc-kube-api-access-vjxgz\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.924923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.924976 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.028382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55prz\" (UID: 
\"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.028907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.028990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxgz\" (UniqueName: \"kubernetes.io/projected/82b1f92b-6077-4b4c-876a-3d732a78b2cc-kube-api-access-vjxgz\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.030613 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.041849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.051504 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxgz\" 
(UniqueName: \"kubernetes.io/projected/82b1f92b-6077-4b4c-876a-3d732a78b2cc-kube-api-access-vjxgz\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.227464 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.242242 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.293828 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334541 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"79d72458-cb87-481a-9697-4377383c296e\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334715 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"f7741215-a775-4b93-9062-45e620560d49\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334775 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"f7741215-a775-4b93-9062-45e620560d49\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 
10:52:20.334856 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4c8l\" (UniqueName: \"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"79d72458-cb87-481a-9697-4377383c296e\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334912 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"f7741215-a775-4b93-9062-45e620560d49\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334955 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"79d72458-cb87-481a-9697-4377383c296e\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.335734 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "79d72458-cb87-481a-9697-4377383c296e" (UID: "79d72458-cb87-481a-9697-4377383c296e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.341258 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "79d72458-cb87-481a-9697-4377383c296e" (UID: "79d72458-cb87-481a-9697-4377383c296e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.341666 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l" (OuterVolumeSpecName: "kube-api-access-q4c8l") pod "79d72458-cb87-481a-9697-4377383c296e" (UID: "79d72458-cb87-481a-9697-4377383c296e"). InnerVolumeSpecName "kube-api-access-q4c8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.343123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk" (OuterVolumeSpecName: "kube-api-access-f74xk") pod "f7741215-a775-4b93-9062-45e620560d49" (UID: "f7741215-a775-4b93-9062-45e620560d49"). InnerVolumeSpecName "kube-api-access-f74xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.358748 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities" (OuterVolumeSpecName: "utilities") pod "f7741215-a775-4b93-9062-45e620560d49" (UID: "f7741215-a775-4b93-9062-45e620560d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.378128 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.416222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7741215-a775-4b93-9062-45e620560d49" (UID: "f7741215-a775-4b93-9062-45e620560d49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.436863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.436979 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437446 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437465 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437479 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: 
I0109 10:52:20.437489 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437499 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4c8l\" (UniqueName: \"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437651 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.438699 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities" (OuterVolumeSpecName: "utilities") pod "b713ecb8-60e3-40f5-b7fa-5cf818b59b99" (UID: "b713ecb8-60e3-40f5-b7fa-5cf818b59b99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.447980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz" (OuterVolumeSpecName: "kube-api-access-w2hhz") pod "b713ecb8-60e3-40f5-b7fa-5cf818b59b99" (UID: "b713ecb8-60e3-40f5-b7fa-5cf818b59b99"). InnerVolumeSpecName "kube-api-access-w2hhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.508817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b713ecb8-60e3-40f5-b7fa-5cf818b59b99" (UID: "b713ecb8-60e3-40f5-b7fa-5cf818b59b99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.539626 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.539670 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.539680 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.703762 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717786 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7741215-a775-4b93-9062-45e620560d49" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"a179ea666208967ecfd43822950b057cd35581408873a5090e17c2f3344f91f0"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717940 4727 scope.go:117] "RemoveContainer" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.718106 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723436 4727 generic.go:334] "Generic (PLEG): container finished" podID="79d72458-cb87-481a-9697-4377383c296e" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerDied","Data":"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723600 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerDied","Data":"cb8511618c1168f1b695c78cda0dcd1111aea86736fe3350e8e14bc57a092c35"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723713 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730700 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730865 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"fb23bdfd131c74ca699783debec87aba4e592b8f689b5331a1ea091df7d605ad"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730973 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.741271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.741347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.741409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746530 4727 generic.go:334] "Generic (PLEG): container finished" podID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" 
event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"974cefab389bdd1c50fa8159159be952f608b390b753f134588ad26e90c6144f"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746690 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.753428 4727 scope.go:117] "RemoveContainer" containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.754075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities" (OuterVolumeSpecName: "utilities") pod "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" (UID: "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.757220 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd" (OuterVolumeSpecName: "kube-api-access-8p2pd") pod "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" (UID: "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"). InnerVolumeSpecName "kube-api-access-8p2pd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.804165 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.809815 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55prz"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.820794 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.825862 4727 scope.go:117] "RemoveContainer" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.827084 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.850900 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.850960 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.859925 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.867099 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" (UID: 
"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.880276 4727 scope.go:117] "RemoveContainer" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" Jan 09 10:52:20 crc kubenswrapper[4727]: E0109 10:52:20.880867 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d\": container with ID starting with cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d not found: ID does not exist" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.880932 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d"} err="failed to get container status \"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d\": rpc error: code = NotFound desc = could not find container \"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d\": container with ID starting with cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d not found: ID does not exist" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.880977 4727 scope.go:117] "RemoveContainer" containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" Jan 09 10:52:20 crc kubenswrapper[4727]: E0109 10:52:20.882662 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2\": container with ID starting with 53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2 not found: ID does not exist" 
containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.883143 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2"} err="failed to get container status \"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2\": rpc error: code = NotFound desc = could not find container \"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2\": container with ID starting with 53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2 not found: ID does not exist" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.883522 4727 scope.go:117] "RemoveContainer" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.884384 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d72458-cb87-481a-9697-4377383c296e" path="/var/lib/kubelet/pods/79d72458-cb87-481a-9697-4377383c296e/volumes" Jan 09 10:52:20 crc kubenswrapper[4727]: E0109 10:52:20.884415 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29\": container with ID starting with 394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29 not found: ID does not exist" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.884568 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29"} err="failed to get container status \"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29\": rpc error: code = NotFound desc = could not find container 
\"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29\": container with ID starting with 394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29 not found: ID does not exist" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.884600 4727 scope.go:117] "RemoveContainer" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.886036 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7741215-a775-4b93-9062-45e620560d49" path="/var/lib/kubelet/pods/f7741215-a775-4b93-9062-45e620560d49/volumes" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.889818 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.889858 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.953474 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.065900 4727 scope.go:117] "RemoveContainer" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.067616 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00\": container with ID starting with e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00 not found: ID does not exist" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.067654 4727 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00"} err="failed to get container status \"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00\": rpc error: code = NotFound desc = could not find container \"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00\": container with ID starting with e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.067708 4727 scope.go:117] "RemoveContainer" containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.101944 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.101994 4727 scope.go:117] "RemoveContainer" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.105651 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.126865 4727 scope.go:117] "RemoveContainer" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.148085 4727 scope.go:117] "RemoveContainer" containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.149368 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f\": container with ID starting with 1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f not found: ID does not exist" 
containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.149416 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f"} err="failed to get container status \"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f\": rpc error: code = NotFound desc = could not find container \"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f\": container with ID starting with 1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.149455 4727 scope.go:117] "RemoveContainer" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.151906 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee\": container with ID starting with 7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee not found: ID does not exist" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.151927 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee"} err="failed to get container status \"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee\": rpc error: code = NotFound desc = could not find container \"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee\": container with ID starting with 7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.151945 4727 scope.go:117] 
"RemoveContainer" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.152235 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188\": container with ID starting with aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188 not found: ID does not exist" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.152253 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188"} err="failed to get container status \"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188\": rpc error: code = NotFound desc = could not find container \"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188\": container with ID starting with aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.152266 4727 scope.go:117] "RemoveContainer" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.174887 4727 scope.go:117] "RemoveContainer" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.191235 4727 scope.go:117] "RemoveContainer" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.221093 4727 scope.go:117] "RemoveContainer" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.221733 4727 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1\": container with ID starting with d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1 not found: ID does not exist" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.221772 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1"} err="failed to get container status \"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1\": rpc error: code = NotFound desc = could not find container \"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1\": container with ID starting with d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.221826 4727 scope.go:117] "RemoveContainer" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.222700 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568\": container with ID starting with abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568 not found: ID does not exist" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.222723 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568"} err="failed to get container status \"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568\": rpc error: code = NotFound desc = could not find container 
\"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568\": container with ID starting with abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.222762 4727 scope.go:117] "RemoveContainer" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.223137 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729\": container with ID starting with 55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729 not found: ID does not exist" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.223182 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729"} err="failed to get container status \"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729\": rpc error: code = NotFound desc = could not find container \"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729\": container with ID starting with 55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.223550 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.258805 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.258891 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.258919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.261491 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities" (OuterVolumeSpecName: "utilities") pod "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" (UID: "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.268730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr" (OuterVolumeSpecName: "kube-api-access-vk7rr") pod "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" (UID: "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2"). InnerVolumeSpecName "kube-api-access-vk7rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.361209 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.361272 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.378036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" (UID: "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.463232 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.757585 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" event={"ID":"82b1f92b-6077-4b4c-876a-3d732a78b2cc","Type":"ContainerStarted","Data":"c4120e6e0b13a12e3c80c4f82c20a071169cc3f87d8d7559288902d5a4135b48"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.757840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.757894 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-55prz" event={"ID":"82b1f92b-6077-4b4c-876a-3d732a78b2cc","Type":"ContainerStarted","Data":"435de44b53e8fad8ef60cf2f001292fc2d53d0bb8b2e47ba5b9c8335f2a7f892"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761162 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" exitCode=0 Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761313 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761341 4727 scope.go:117] "RemoveContainer" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761456 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761540 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.783459 4727 scope.go:117] "RemoveContainer" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.809346 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" podStartSLOduration=2.809317862 podStartE2EDuration="2.809317862s" podCreationTimestamp="2026-01-09 10:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:52:21.783765789 +0000 UTC m=+387.233670600" watchObservedRunningTime="2026-01-09 10:52:21.809317862 +0000 UTC m=+387.259222643" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.828997 4727 scope.go:117] "RemoveContainer" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.829896 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.835888 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.850297 4727 scope.go:117] "RemoveContainer" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.850940 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1\": container with ID starting with 9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1 not found: ID does not exist" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.850980 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1"} err="failed to get container status \"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1\": rpc error: code = NotFound desc = could not find container \"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1\": container with ID starting with 9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.851013 4727 scope.go:117] "RemoveContainer" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.851547 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc\": container with ID starting with f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc not found: ID does not exist" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.851602 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc"} err="failed to get container status \"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc\": rpc error: code = NotFound desc = could not find container \"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc\": container with ID 
starting with f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.851644 4727 scope.go:117] "RemoveContainer" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.852143 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222\": container with ID starting with d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222 not found: ID does not exist" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.852179 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222"} err="failed to get container status \"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222\": rpc error: code = NotFound desc = could not find container \"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222\": container with ID starting with d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.983731 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vc94w"] Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984532 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984550 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984569 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984578 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984588 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984596 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984606 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984613 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984623 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984630 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984641 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984647 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984655 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984662 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984671 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984678 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984686 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984693 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984700 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984707 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984720 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984726 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984737 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984744 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984756 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984764 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985040 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985052 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985088 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985101 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985109 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.986036 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.989891 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.002210 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vc94w"] Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.072886 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-utilities\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.073012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhtz\" (UniqueName: \"kubernetes.io/projected/9334dd96-d38c-460b-a258-2bccfc2960d5-kube-api-access-nxhtz\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.073053 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-catalog-content\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.174605 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxhtz\" (UniqueName: \"kubernetes.io/projected/9334dd96-d38c-460b-a258-2bccfc2960d5-kube-api-access-nxhtz\") pod \"redhat-marketplace-vc94w\" (UID: 
\"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.174684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-catalog-content\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.174758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-utilities\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.175490 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-utilities\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.175804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-catalog-content\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.196924 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxhtz\" (UniqueName: \"kubernetes.io/projected/9334dd96-d38c-460b-a258-2bccfc2960d5-kube-api-access-nxhtz\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " 
pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.322048 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.728938 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vc94w"] Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.772248 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerStarted","Data":"e7cbcdd9132010adbd3a90684a6068ca43c2e53c5ce10e98fcacef1f67a85ff4"} Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.878739 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" path="/var/lib/kubelet/pods/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365/volumes" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.880118 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" path="/var/lib/kubelet/pods/b713ecb8-60e3-40f5-b7fa-5cf818b59b99/volumes" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.880934 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" path="/var/lib/kubelet/pods/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2/volumes" Jan 09 10:52:23 crc kubenswrapper[4727]: I0109 10:52:23.781645 4727 generic.go:334] "Generic (PLEG): container finished" podID="9334dd96-d38c-460b-a258-2bccfc2960d5" containerID="dabd27b1cda459657bbd8b387e2be5d4a0ae97b340939ce3f9eaac4a28219f78" exitCode=0 Jan 09 10:52:23 crc kubenswrapper[4727]: I0109 10:52:23.781726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" 
event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerDied","Data":"dabd27b1cda459657bbd8b387e2be5d4a0ae97b340939ce3f9eaac4a28219f78"} Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.174315 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.180022 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.183034 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.185246 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.202947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.203010 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.203069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod 
\"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.303873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.303923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.303969 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.304650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.304667 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"certified-operators-962zg\" (UID: 
\"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.331279 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.371366 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.375586 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.378203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.392122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.405395 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.405537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " 
pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.405723 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.498283 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.506398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.506457 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.506570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.507052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.507140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.525308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.705159 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.763361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.788839 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerStarted","Data":"930189ee498333983e08c7ab2e58382299db3fb83cb58d6430015969c8cef074"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.175955 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.798332 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" exitCode=0 Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.798427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.799070 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerStarted","Data":"357891722b37e84c5d6696b58f957606ce91311ffc64133377aa8cf62644c51c"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.801770 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerID="bf159a57ad831d29f382ffa97b36634879c00d9cea9b38064632f3c6da0f08f3" exitCode=0 Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.801840 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"bf159a57ad831d29f382ffa97b36634879c00d9cea9b38064632f3c6da0f08f3"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.808042 4727 generic.go:334] "Generic (PLEG): container finished" podID="9334dd96-d38c-460b-a258-2bccfc2960d5" containerID="6e31712e99875535052645daef8f13cd0833da2c8d963f1f7fb3897ca6598ed6" exitCode=0 Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.808109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerDied","Data":"6e31712e99875535052645daef8f13cd0833da2c8d963f1f7fb3897ca6598ed6"} Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.573256 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gdvvw"] Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.575067 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.576770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.582153 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdvvw"] Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.748183 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg7s5\" (UniqueName: \"kubernetes.io/projected/86044c1d-9cd9-49f7-b906-011e3856e591-kube-api-access-fg7s5\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.748261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-utilities\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.748318 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-catalog-content\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.821145 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerStarted","Data":"5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946"} Jan 09 
10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.824268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerStarted","Data":"72a56b8f8e7b4aa8f070f9ebf9f13419328b07f260f79ff05dcfcd2718ec1dc1"} Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.849888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-catalog-content\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg7s5\" (UniqueName: \"kubernetes.io/projected/86044c1d-9cd9-49f7-b906-011e3856e591-kube-api-access-fg7s5\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-utilities\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850422 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-utilities\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-catalog-content\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.873630 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg7s5\" (UniqueName: \"kubernetes.io/projected/86044c1d-9cd9-49f7-b906-011e3856e591-kube-api-access-fg7s5\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.889274 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vc94w" podStartSLOduration=3.435071827 podStartE2EDuration="5.889253763s" podCreationTimestamp="2026-01-09 10:52:21 +0000 UTC" firstStartedPulling="2026-01-09 10:52:23.785610295 +0000 UTC m=+389.235515076" lastFinishedPulling="2026-01-09 10:52:26.239792231 +0000 UTC m=+391.689697012" observedRunningTime="2026-01-09 10:52:26.873072254 +0000 UTC m=+392.322977035" watchObservedRunningTime="2026-01-09 10:52:26.889253763 +0000 UTC m=+392.339158534" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.932958 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.416891 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdvvw"] Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.833743 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" exitCode=0 Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.833858 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f"} Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.836778 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerID="5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946" exitCode=0 Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.836861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946"} Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.838841 4727 generic.go:334] "Generic (PLEG): container finished" podID="86044c1d-9cd9-49f7-b906-011e3856e591" containerID="b64adef4a01330eaf4950f8914c442088b90b7a65a9374c0b9cb3c76b61ac8e6" exitCode=0 Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.839583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerDied","Data":"b64adef4a01330eaf4950f8914c442088b90b7a65a9374c0b9cb3c76b61ac8e6"} Jan 09 10:52:27 crc 
kubenswrapper[4727]: I0109 10:52:27.839623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerStarted","Data":"8fd05d089069d89520a9575e3132cdb8e9cc906016887f4415e0f8747d353211"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.846431 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerStarted","Data":"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.849827 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerStarted","Data":"33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.851386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerStarted","Data":"110a6e90c9d2e6f523b48566eb8ee4d678fcb5a05bf8f3d05067a107a38f34b6"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.877457 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9rsdw" podStartSLOduration=2.369992656 podStartE2EDuration="4.877437818s" podCreationTimestamp="2026-01-09 10:52:24 +0000 UTC" firstStartedPulling="2026-01-09 10:52:25.802054028 +0000 UTC m=+391.251958809" lastFinishedPulling="2026-01-09 10:52:28.30949919 +0000 UTC m=+393.759403971" observedRunningTime="2026-01-09 10:52:28.87232358 +0000 UTC m=+394.322228361" watchObservedRunningTime="2026-01-09 10:52:28.877437818 +0000 UTC m=+394.327342599" Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.899365 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-962zg" podStartSLOduration=2.316981627 podStartE2EDuration="4.899345713s" podCreationTimestamp="2026-01-09 10:52:24 +0000 UTC" firstStartedPulling="2026-01-09 10:52:25.803643161 +0000 UTC m=+391.253547962" lastFinishedPulling="2026-01-09 10:52:28.386007267 +0000 UTC m=+393.835912048" observedRunningTime="2026-01-09 10:52:28.895421707 +0000 UTC m=+394.345326488" watchObservedRunningTime="2026-01-09 10:52:28.899345713 +0000 UTC m=+394.349250494" Jan 09 10:52:29 crc kubenswrapper[4727]: I0109 10:52:29.861459 4727 generic.go:334] "Generic (PLEG): container finished" podID="86044c1d-9cd9-49f7-b906-011e3856e591" containerID="110a6e90c9d2e6f523b48566eb8ee4d678fcb5a05bf8f3d05067a107a38f34b6" exitCode=0 Jan 09 10:52:29 crc kubenswrapper[4727]: I0109 10:52:29.862236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerDied","Data":"110a6e90c9d2e6f523b48566eb8ee4d678fcb5a05bf8f3d05067a107a38f34b6"} Jan 09 10:52:31 crc kubenswrapper[4727]: I0109 10:52:31.874630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerStarted","Data":"7edded77ffe5b19e0a3f9ce3746e48b3a0700239fe057b835c136da80809e5eb"} Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.323264 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.323332 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.370821 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.390984 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gdvvw" podStartSLOduration=3.421601892 podStartE2EDuration="6.390950724s" podCreationTimestamp="2026-01-09 10:52:26 +0000 UTC" firstStartedPulling="2026-01-09 10:52:27.841718421 +0000 UTC m=+393.291623202" lastFinishedPulling="2026-01-09 10:52:30.811067253 +0000 UTC m=+396.260972034" observedRunningTime="2026-01-09 10:52:31.895983026 +0000 UTC m=+397.345887807" watchObservedRunningTime="2026-01-09 10:52:32.390950724 +0000 UTC m=+397.840855505" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.927664 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:33 crc kubenswrapper[4727]: I0109 10:52:33.879932 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:33 crc kubenswrapper[4727]: I0109 10:52:33.959972 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.499602 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.500112 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.546675 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.706783 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.706874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.747437 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.932748 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.941397 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:36 crc kubenswrapper[4727]: I0109 10:52:36.934145 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:36 crc kubenswrapper[4727]: I0109 10:52:36.934568 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:37 crc kubenswrapper[4727]: I0109 10:52:37.984426 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gdvvw" podUID="86044c1d-9cd9-49f7-b906-011e3856e591" containerName="registry-server" probeResult="failure" output=< Jan 09 10:52:37 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:52:37 crc kubenswrapper[4727]: > Jan 09 10:52:39 crc kubenswrapper[4727]: I0109 10:52:39.406328 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:52:39 crc kubenswrapper[4727]: 
I0109 10:52:39.406405 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:52:46 crc kubenswrapper[4727]: I0109 10:52:46.990307 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:47 crc kubenswrapper[4727]: I0109 10:52:47.046285 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:51 crc kubenswrapper[4727]: I0109 10:52:51.056375 4727 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podb4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podb4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365] : Timed out while waiting for systemd to remove kubepods-burstable-podb4cf56cb_1bd2_4ba2_84d4_8ad0b7fdd365.slice" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.010123 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" containerID="cri-o://fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" gracePeriod=30 Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.439973 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.567971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568036 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568135 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568456 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.569674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.569902 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.580068 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.580072 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.580570 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq" (OuterVolumeSpecName: "kube-api-access-6f5nq") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "kube-api-access-6f5nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.589377 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.595279 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.596180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670437 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670552 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670576 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670598 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670619 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670638 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670657 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044113 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" exitCode=0 Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerDied","Data":"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713"} Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerDied","Data":"ddbd37f0ce66367420bf898e597290bc9a838afaf3a3a6e5e804343b2dd74136"} Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044241 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044269 4727 scope.go:117] "RemoveContainer" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.066651 4727 scope.go:117] "RemoveContainer" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" Jan 09 10:53:00 crc kubenswrapper[4727]: E0109 10:53:00.067554 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713\": container with ID starting with fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713 not found: ID does not exist" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.067634 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713"} err="failed to get container status \"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713\": rpc error: code = NotFound desc = could not find container \"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713\": container with ID starting with fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713 not found: ID does not exist" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.090537 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.116925 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.869910 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" path="/var/lib/kubelet/pods/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5/volumes" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.404630 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.406910 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.407134 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.408427 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.408861 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90" gracePeriod=600 Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109227 4727 generic.go:334] "Generic 
(PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90" exitCode=0 Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90"} Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31"} Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109593 4727 scope.go:117] "RemoveContainer" containerID="21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827" Jan 09 10:55:09 crc kubenswrapper[4727]: I0109 10:55:09.405164 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:55:09 crc kubenswrapper[4727]: I0109 10:55:09.406062 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:55:39 crc kubenswrapper[4727]: I0109 10:55:39.405248 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:55:39 crc kubenswrapper[4727]: I0109 10:55:39.405950 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.404856 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.405816 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.405899 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.406988 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.407083 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31" gracePeriod=600 Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.417430 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31" exitCode=0 Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.417527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31"} Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.418036 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3"} Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.418071 4727 scope.go:117] "RemoveContainer" containerID="26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.448533 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr"] Jan 09 10:57:36 crc kubenswrapper[4727]: E0109 10:57:36.449636 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.449658 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" Jan 09 
10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.449809 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.450445 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.452593 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-x5t9g" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.454874 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.456694 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2qqks"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.458044 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.461128 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5n4rr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.461726 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.474704 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.497874 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlfjg"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.499210 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.502133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2qqks"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.502260 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l6krn" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.505855 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlfjg"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.598179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmd6\" (UniqueName: \"kubernetes.io/projected/2715d39f-d488-448b-b6f2-ff592dea195a-kube-api-access-vnmd6\") pod \"cert-manager-858654f9db-2qqks\" (UID: \"2715d39f-d488-448b-b6f2-ff592dea195a\") " pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.598310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqddj\" (UniqueName: \"kubernetes.io/projected/3a45eda8-4151-4b6c-b0f2-ab6416dc34e9-kube-api-access-vqddj\") pod \"cert-manager-cainjector-cf98fcc89-cbsgr\" (UID: \"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.598353 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hhd\" (UniqueName: \"kubernetes.io/projected/5cee0bf6-27dd-4944-bbef-574afbae1542-kube-api-access-l6hhd\") pod \"cert-manager-webhook-687f57d79b-qlfjg\" (UID: \"5cee0bf6-27dd-4944-bbef-574afbae1542\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.700212 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnmd6\" (UniqueName: \"kubernetes.io/projected/2715d39f-d488-448b-b6f2-ff592dea195a-kube-api-access-vnmd6\") pod \"cert-manager-858654f9db-2qqks\" (UID: \"2715d39f-d488-448b-b6f2-ff592dea195a\") " pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.700328 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqddj\" (UniqueName: \"kubernetes.io/projected/3a45eda8-4151-4b6c-b0f2-ab6416dc34e9-kube-api-access-vqddj\") pod \"cert-manager-cainjector-cf98fcc89-cbsgr\" (UID: \"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.700368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hhd\" (UniqueName: \"kubernetes.io/projected/5cee0bf6-27dd-4944-bbef-574afbae1542-kube-api-access-l6hhd\") pod \"cert-manager-webhook-687f57d79b-qlfjg\" (UID: \"5cee0bf6-27dd-4944-bbef-574afbae1542\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.720399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnmd6\" (UniqueName: \"kubernetes.io/projected/2715d39f-d488-448b-b6f2-ff592dea195a-kube-api-access-vnmd6\") pod \"cert-manager-858654f9db-2qqks\" (UID: \"2715d39f-d488-448b-b6f2-ff592dea195a\") " pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.720442 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hhd\" (UniqueName: \"kubernetes.io/projected/5cee0bf6-27dd-4944-bbef-574afbae1542-kube-api-access-l6hhd\") pod \"cert-manager-webhook-687f57d79b-qlfjg\" (UID: \"5cee0bf6-27dd-4944-bbef-574afbae1542\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.720442 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqddj\" (UniqueName: \"kubernetes.io/projected/3a45eda8-4151-4b6c-b0f2-ab6416dc34e9-kube-api-access-vqddj\") pod \"cert-manager-cainjector-cf98fcc89-cbsgr\" (UID: \"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.770867 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.781366 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.824387 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.047636 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2qqks"] Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.077296 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.090414 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr"] Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.116596 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlfjg"] Jan 09 10:57:37 crc kubenswrapper[4727]: W0109 10:57:37.125693 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cee0bf6_27dd_4944_bbef_574afbae1542.slice/crio-32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2 WatchSource:0}: Error finding container 32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2: Status 404 returned error can't find the container with id 32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2 Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.953968 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" event={"ID":"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9","Type":"ContainerStarted","Data":"b33dd20b656d4c4d4580edb24506b53bd4d87e60bb7a09a01147e783e7f3db2b"} Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.955878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2qqks" event={"ID":"2715d39f-d488-448b-b6f2-ff592dea195a","Type":"ContainerStarted","Data":"ed0394e70c72e641dbd8d58ae215deffd337bc69141cab91e59ef79b091fd78e"} Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.957305 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" event={"ID":"5cee0bf6-27dd-4944-bbef-574afbae1542","Type":"ContainerStarted","Data":"32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.984864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" event={"ID":"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9","Type":"ContainerStarted","Data":"79d3135513c5bf28f04e5b1a7fda1a1222d9801038ffc7ff9944bfde65affb44"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.986534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2qqks" 
event={"ID":"2715d39f-d488-448b-b6f2-ff592dea195a","Type":"ContainerStarted","Data":"5fc901d294e1e40692b0da336ff9523be5b9030e6f2604f82b82e99de4c0afa6"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.987970 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" event={"ID":"5cee0bf6-27dd-4944-bbef-574afbae1542","Type":"ContainerStarted","Data":"2a9635efe863cde95623b36a60cf0275ad8292f2790f7447e5219732210f774d"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.988132 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:42 crc kubenswrapper[4727]: I0109 10:57:42.005194 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" podStartSLOduration=1.901223648 podStartE2EDuration="6.005168531s" podCreationTimestamp="2026-01-09 10:57:36 +0000 UTC" firstStartedPulling="2026-01-09 10:57:37.101028384 +0000 UTC m=+702.550933165" lastFinishedPulling="2026-01-09 10:57:41.204973267 +0000 UTC m=+706.654878048" observedRunningTime="2026-01-09 10:57:42.00259172 +0000 UTC m=+707.452496501" watchObservedRunningTime="2026-01-09 10:57:42.005168531 +0000 UTC m=+707.455073312" Jan 09 10:57:42 crc kubenswrapper[4727]: I0109 10:57:42.029291 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2qqks" podStartSLOduration=1.898238601 podStartE2EDuration="6.029268136s" podCreationTimestamp="2026-01-09 10:57:36 +0000 UTC" firstStartedPulling="2026-01-09 10:57:37.077004613 +0000 UTC m=+702.526909394" lastFinishedPulling="2026-01-09 10:57:41.208034148 +0000 UTC m=+706.657938929" observedRunningTime="2026-01-09 10:57:42.025973301 +0000 UTC m=+707.475878102" watchObservedRunningTime="2026-01-09 10:57:42.029268136 +0000 UTC m=+707.479172937" Jan 09 10:57:42 crc kubenswrapper[4727]: I0109 10:57:42.053778 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" podStartSLOduration=1.98423102 podStartE2EDuration="6.053758743s" podCreationTimestamp="2026-01-09 10:57:36 +0000 UTC" firstStartedPulling="2026-01-09 10:57:37.129075396 +0000 UTC m=+702.578980177" lastFinishedPulling="2026-01-09 10:57:41.198603129 +0000 UTC m=+706.648507900" observedRunningTime="2026-01-09 10:57:42.049252651 +0000 UTC m=+707.499157442" watchObservedRunningTime="2026-01-09 10:57:42.053758743 +0000 UTC m=+707.503663534" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.075940 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"] Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079048 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" containerID="cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079284 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" containerID="cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079133 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079170 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" containerID="cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079221 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" containerID="cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079226 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" containerID="cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079819 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" containerID="cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.129931 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" containerID="cri-o://38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.425745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.429707 4727 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-acl-logging/0.log" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.430723 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-controller/0.log" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.432304 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462543 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: 
\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462709 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462823 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462811 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462832 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462887 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462911 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462930 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462907 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket" (OuterVolumeSpecName: "log-socket") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462962 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462977 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462999 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash" (OuterVolumeSpecName: "host-slash") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463093 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463021 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463142 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463188 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463197 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463209 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log" (OuterVolumeSpecName: "node-log") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463363 4727 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463379 4727 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463391 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463404 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463414 4727 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463426 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463436 4727 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc 
kubenswrapper[4727]: I0109 10:57:46.463445 4727 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463455 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463465 4727 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463475 4727 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463484 4727 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463496 4727 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463358 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463607 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463680 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.464499 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.477407 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl" (OuterVolumeSpecName: "kube-api-access-d4rgl") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "kube-api-access-d4rgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.478090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.487426 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.501792 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sgflm"] Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502259 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502303 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502312 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502326 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502337 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502349 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502358 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502378 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502389 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502395 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502404 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502410 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502419 4727 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502425 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502433 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502440 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502455 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502462 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502469 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kubecfg-setup" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502475 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kubecfg-setup" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502619 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502635 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502642 4727 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502650 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502659 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502667 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502673 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502683 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502691 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502700 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502709 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502719 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502833 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502841 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502850 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502856 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.504816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.563938 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovn-node-metrics-cert\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564180 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-systemd-units\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564249 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-env-overrides\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-node-log\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-ovn\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565007 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-etc-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565043 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-netd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565183 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-netns\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565306 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9sb\" (UniqueName: \"kubernetes.io/projected/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-kube-api-access-4v9sb\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565390 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-kubelet\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-slash\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565675 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-config\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-bin\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-var-lib-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-log-socket\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565908 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-script-lib\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-systemd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566609 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566633 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566645 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc 
kubenswrapper[4727]: I0109 10:57:46.566690 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566707 4727 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566723 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566757 4727 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.667838 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-systemd-units\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.667957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-env-overrides\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.667996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-node-log\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-node-log\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668185 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-ovn\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668188 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-ovn\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-etc-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-netd\") pod \"ovnkube-node-sgflm\" (UID: 
\"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668292 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-netns\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v9sb\" (UniqueName: \"kubernetes.io/projected/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-kube-api-access-4v9sb\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668361 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-kubelet\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-slash\") pod \"ovnkube-node-sgflm\" (UID: 
\"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-netns\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-config\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-bin\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-var-lib-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-log-socket\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668573 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-script-lib\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-systemd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovn-node-metrics-cert\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc 
kubenswrapper[4727]: I0109 10:57:46.668812 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-slash\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668826 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-var-lib-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-kubelet\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668861 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-log-socket\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668885 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-systemd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-bin\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-etc-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-env-overrides\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669070 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-netd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669149 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-systemd-units\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-config\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-script-lib\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.673193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovn-node-metrics-cert\") pod 
\"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.686192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v9sb\" (UniqueName: \"kubernetes.io/projected/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-kube-api-access-4v9sb\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.820232 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.828590 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.019354 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.021501 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-acl-logging/0.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.022031 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-controller/0.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023024 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023054 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023071 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023082 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023090 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023097 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023105 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" exitCode=143 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023114 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" exitCode=143 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} Jan 09 
10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023208 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023237 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023396 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023442 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023453 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023459 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023465 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023471 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023477 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023484 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023490 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023499 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023545 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023560 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023569 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023577 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023584 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023590 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023596 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023602 4727 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023608 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023613 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023618 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023636 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023642 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023647 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 
10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023652 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023658 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023666 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023672 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023678 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023684 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023689 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" 
event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023705 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023711 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023718 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023724 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023729 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023735 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023740 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023746 4727 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023751 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023756 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023827 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042104 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/2.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042537 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042576 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" exitCode=2 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerDied","Data":"dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042662 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.043725 4727 scope.go:117] "RemoveContainer" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.044185 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-57zpr_openshift-multus(f0230d78-c2b3-4a02-8243-6b39e8eecb90)\"" pod="openshift-multus/multus-57zpr" podUID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.052685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerDied","Data":"126918a79692264b592239126cfbf4ecf54be1f24564a8c81bcc09429ded42ae"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.052638 4727 generic.go:334] "Generic (PLEG): container finished" podID="dbb43a9b-cf31-4705-9d1e-0447d2520ef6" containerID="126918a79692264b592239126cfbf4ecf54be1f24564a8c81bcc09429ded42ae" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.052875 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"9ba0a3778a79450334ce9ba2bbaf2db984b061ad3b6e8325cce6aaf29770eddf"} Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.100822 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.130956 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"] Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.136076 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"] Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.136265 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.171313 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.186364 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.202877 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.218109 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.231827 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.270444 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.292580 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.329795 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.330469 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting 
with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.330696 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.330735 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.331150 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331176 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does 
not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331195 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.331873 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331941 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331988 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.332538 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.332601 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.332638 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.333195 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333259 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333288 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.333749 4727 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333793 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333811 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.334197 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334237 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could 
not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334265 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.334621 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334660 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334684 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.334989 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: 
ID does not exist" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335016 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335032 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.335347 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335377 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335394 4727 
scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335858 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335876 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336192 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336224 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336597 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc 
error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336616 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336870 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336905 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337278 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337302 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc 
kubenswrapper[4727]: I0109 10:57:47.337699 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337723 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338094 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338121 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338385 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container 
with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338442 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338814 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338839 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339133 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339156 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339663 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339683 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340196 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340230 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340589 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not 
exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340612 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340870 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340898 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341142 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341166 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341612 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status 
\"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341635 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342011 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342033 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342336 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342365 4727 scope.go:117] "RemoveContainer" 
containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342767 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342796 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343047 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343069 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343337 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could 
not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343356 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343923 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343940 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.344295 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.344317 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 
10:57:47.344718 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.344736 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345024 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345043 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345290 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with 
a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345315 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345584 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345617 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345892 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345920 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346166 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346185 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346501 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346533 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346838 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not 
exist" Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064206 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"cc86c5a2adf97170714efae5e4a9dbeb3ade1a2a2f330bcc7c5e63899dd38085"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"18ff2d4c2fc9a21816afcdb8664f3f354d174ec4f28d56b4129d2d2f54d86fac"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"f929a9be47ad2af0147e428696085c4a248cfdb8be709778bff92346d93e1be1"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"f9b81f27cc75f27204bce6e56eeb1eb194252ccdf09bc3662711efe3184e517a"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064669 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"fca87e3dc1a22a45db30000b59b85a55e2acecdb2d0c88a0aab738c0275f3a47"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"96c5486c198d439ff658afed4a3e5a9d006323c69712c441b637ead0840b8c7a"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.867834 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" path="/var/lib/kubelet/pods/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/volumes" Jan 09 10:57:51 crc kubenswrapper[4727]: I0109 10:57:51.099554 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"197bded7de6d4124ea1df8cf7d8ae446c4892e997e088b615337bc9a8a502bf4"} Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.116962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"3f6336f513cdb444dfdeac4313fa3385bf0c9a10ad2dcc94f05b26c43409b9d3"} Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.117454 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.117492 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.117532 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.149724 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.158553 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" podStartSLOduration=7.158533404 podStartE2EDuration="7.158533404s" podCreationTimestamp="2026-01-09 10:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
10:57:53.149977853 +0000 UTC m=+718.599882644" watchObservedRunningTime="2026-01-09 10:57:53.158533404 +0000 UTC m=+718.608438185" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.167465 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:55 crc kubenswrapper[4727]: I0109 10:57:55.200414 4727 scope.go:117] "RemoveContainer" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" Jan 09 10:57:56 crc kubenswrapper[4727]: I0109 10:57:56.144616 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/2.log" Jan 09 10:57:59 crc kubenswrapper[4727]: I0109 10:57:59.860053 4727 scope.go:117] "RemoveContainer" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" Jan 09 10:57:59 crc kubenswrapper[4727]: E0109 10:57:59.860922 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-57zpr_openshift-multus(f0230d78-c2b3-4a02-8243-6b39e8eecb90)\"" pod="openshift-multus/multus-57zpr" podUID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" Jan 09 10:58:09 crc kubenswrapper[4727]: I0109 10:58:09.405656 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:58:09 crc kubenswrapper[4727]: I0109 10:58:09.406448 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 09 10:58:10 crc kubenswrapper[4727]: I0109 10:58:10.860032 4727 scope.go:117] "RemoveContainer" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" Jan 09 10:58:12 crc kubenswrapper[4727]: I0109 10:58:12.246252 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/2.log" Jan 09 10:58:12 crc kubenswrapper[4727]: I0109 10:58:12.246923 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"9d3cd3d06b0c9e101ffd0febe37ef5a4cfde2cca5e75c9f3f4c24060cd039932"} Jan 09 10:58:16 crc kubenswrapper[4727]: I0109 10:58:16.847923 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.177795 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"] Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.179972 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.182419 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.188208 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"] Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.200914 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.200989 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.201016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: 
I0109 10:58:28.301998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.302058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.302117 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.302656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.303052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.325138 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.500955 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.723217 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"] Jan 09 10:58:28 crc kubenswrapper[4727]: W0109 10:58:28.727744 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb997fa3_0e55_46ca_b666_d4b710fe2bef.slice/crio-0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a WatchSource:0}: Error finding container 0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a: Status 404 returned error can't find the container with id 0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a Jan 09 10:58:29 crc kubenswrapper[4727]: I0109 10:58:29.640340 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerID="68a19a8966a90aaacb3c61d973589a87d1c5429eab6039f0a54b20ac0b9be5bf" 
exitCode=0 Jan 09 10:58:29 crc kubenswrapper[4727]: I0109 10:58:29.640425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"68a19a8966a90aaacb3c61d973589a87d1c5429eab6039f0a54b20ac0b9be5bf"} Jan 09 10:58:29 crc kubenswrapper[4727]: I0109 10:58:29.640529 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerStarted","Data":"0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a"} Jan 09 10:58:31 crc kubenswrapper[4727]: I0109 10:58:31.653883 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerID="1f011cad76375d514a22721bf83e8135db90dfe7477723ba431b56651935ae2e" exitCode=0 Jan 09 10:58:31 crc kubenswrapper[4727]: I0109 10:58:31.653942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"1f011cad76375d514a22721bf83e8135db90dfe7477723ba431b56651935ae2e"} Jan 09 10:58:32 crc kubenswrapper[4727]: I0109 10:58:32.663606 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerID="4de084d41b428c101bfd2216e77e4024d1c53bd2397c213a6dbcdc1ac632fa67" exitCode=0 Jan 09 10:58:32 crc kubenswrapper[4727]: I0109 10:58:32.663678 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"4de084d41b428c101bfd2216e77e4024d1c53bd2397c213a6dbcdc1ac632fa67"} Jan 09 10:58:33 crc 
kubenswrapper[4727]: I0109 10:58:33.899734 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.985683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.985842 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.985877 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.986801 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle" (OuterVolumeSpecName: "bundle") pod "fb997fa3-0e55-46ca-b666-d4b710fe2bef" (UID: "fb997fa3-0e55-46ca-b666-d4b710fe2bef"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.993002 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j" (OuterVolumeSpecName: "kube-api-access-dn88j") pod "fb997fa3-0e55-46ca-b666-d4b710fe2bef" (UID: "fb997fa3-0e55-46ca-b666-d4b710fe2bef"). InnerVolumeSpecName "kube-api-access-dn88j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.000180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util" (OuterVolumeSpecName: "util") pod "fb997fa3-0e55-46ca-b666-d4b710fe2bef" (UID: "fb997fa3-0e55-46ca-b666-d4b710fe2bef"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.087478 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.087584 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") on node \"crc\" DevicePath \"\"" Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.087601 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") on node \"crc\" DevicePath \"\"" Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.680833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" 
event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a"} Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.681788 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a" Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.680968 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.496836 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p86wv"] Jan 09 10:58:36 crc kubenswrapper[4727]: E0109 10:58:36.497184 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="util" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497200 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="util" Jan 09 10:58:36 crc kubenswrapper[4727]: E0109 10:58:36.497275 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="pull" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497283 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="pull" Jan 09 10:58:36 crc kubenswrapper[4727]: E0109 10:58:36.497302 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="extract" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497309 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="extract" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497434 4727 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="extract" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.498068 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.500826 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.501094 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jfh6k" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.501304 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.514048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p86wv"] Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.629730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlvf\" (UniqueName: \"kubernetes.io/projected/b4c7550e-1eaa-4e85-b44d-c752f6e37955-kube-api-access-mvlvf\") pod \"nmstate-operator-6769fb99d-p86wv\" (UID: \"b4c7550e-1eaa-4e85-b44d-c752f6e37955\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.731330 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlvf\" (UniqueName: \"kubernetes.io/projected/b4c7550e-1eaa-4e85-b44d-c752f6e37955-kube-api-access-mvlvf\") pod \"nmstate-operator-6769fb99d-p86wv\" (UID: \"b4c7550e-1eaa-4e85-b44d-c752f6e37955\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.751297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mvlvf\" (UniqueName: \"kubernetes.io/projected/b4c7550e-1eaa-4e85-b44d-c752f6e37955-kube-api-access-mvlvf\") pod \"nmstate-operator-6769fb99d-p86wv\" (UID: \"b4c7550e-1eaa-4e85-b44d-c752f6e37955\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.815756 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" Jan 09 10:58:37 crc kubenswrapper[4727]: I0109 10:58:37.022957 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p86wv"] Jan 09 10:58:37 crc kubenswrapper[4727]: I0109 10:58:37.700088 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" event={"ID":"b4c7550e-1eaa-4e85-b44d-c752f6e37955","Type":"ContainerStarted","Data":"e5d9d507c977c1a136a3db9ca4e1875e60ef08f63cb83d834b263ea6d75131c8"} Jan 09 10:58:39 crc kubenswrapper[4727]: I0109 10:58:39.405560 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:58:39 crc kubenswrapper[4727]: I0109 10:58:39.405899 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:58:40 crc kubenswrapper[4727]: I0109 10:58:40.719535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" 
event={"ID":"b4c7550e-1eaa-4e85-b44d-c752f6e37955","Type":"ContainerStarted","Data":"d81a9d168758de43d8a35522ca8bbb7ddeecaac756eb9506fa3e39002f9d5635"} Jan 09 10:58:40 crc kubenswrapper[4727]: I0109 10:58:40.746489 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" podStartSLOduration=1.777079177 podStartE2EDuration="4.746459846s" podCreationTimestamp="2026-01-09 10:58:36 +0000 UTC" firstStartedPulling="2026-01-09 10:58:37.040380185 +0000 UTC m=+762.490284966" lastFinishedPulling="2026-01-09 10:58:40.009760854 +0000 UTC m=+765.459665635" observedRunningTime="2026-01-09 10:58:40.743441468 +0000 UTC m=+766.193346249" watchObservedRunningTime="2026-01-09 10:58:40.746459846 +0000 UTC m=+766.196364657" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.796744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"] Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.798207 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.809734 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"] Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.810635 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.812962 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.818106 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"] Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.819446 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n8254" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.834814 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"] Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.862970 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4757d"] Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.863907 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvsb\" (UniqueName: \"kubernetes.io/projected/673fefde-8c1b-46fe-a88a-00b3fa962a3e-kube-api-access-dnvsb\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vd78\" (UniqueName: \"kubernetes.io/projected/0683f840-0540-443e-8f9d-123b701acbd7-kube-api-access-9vd78\") pod \"nmstate-metrics-7f7f7578db-txtbd\" (UID: \"0683f840-0540-443e-8f9d-123b701acbd7\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907462 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-nmstate-lock\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqcr\" (UniqueName: \"kubernetes.io/projected/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-kube-api-access-bbqcr\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-ovs-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-dbus-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.965735 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"] Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.966595 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.969686 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.969977 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-r7g68" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.971033 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.983626 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"] Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnvsb\" (UniqueName: \"kubernetes.io/projected/673fefde-8c1b-46fe-a88a-00b3fa962a3e-kube-api-access-dnvsb\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009147 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsng\" (UniqueName: \"kubernetes.io/projected/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-kube-api-access-blsng\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vd78\" (UniqueName: \"kubernetes.io/projected/0683f840-0540-443e-8f9d-123b701acbd7-kube-api-access-9vd78\") pod \"nmstate-metrics-7f7f7578db-txtbd\" (UID: \"0683f840-0540-443e-8f9d-123b701acbd7\") " 
pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-nmstate-lock\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009229 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqcr\" (UniqueName: \"kubernetes.io/projected/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-kube-api-access-bbqcr\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009281 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-ovs-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " 
pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009361 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-dbus-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-dbus-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.010305 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-nmstate-lock\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.010449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-ovs-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 
10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.010534 4727 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.010599 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair podName:7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac nodeName:}" failed. No retries permitted until 2026-01-09 10:58:42.510575416 +0000 UTC m=+767.960480227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair") pod "nmstate-webhook-f8fb84555-5lc88" (UID: "7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac") : secret "openshift-nmstate-webhook" not found Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.032156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnvsb\" (UniqueName: \"kubernetes.io/projected/673fefde-8c1b-46fe-a88a-00b3fa962a3e-kube-api-access-dnvsb\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.032393 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqcr\" (UniqueName: \"kubernetes.io/projected/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-kube-api-access-bbqcr\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.032436 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vd78\" (UniqueName: \"kubernetes.io/projected/0683f840-0540-443e-8f9d-123b701acbd7-kube-api-access-9vd78\") pod \"nmstate-metrics-7f7f7578db-txtbd\" (UID: \"0683f840-0540-443e-8f9d-123b701acbd7\") " 
pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.110539 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.111048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.111112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blsng\" (UniqueName: \"kubernetes.io/projected/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-kube-api-access-blsng\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.111575 4727 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.111634 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert podName:9721a7da-2c8a-4a0d-ac56-8b4b11c028cd nodeName:}" failed. No retries permitted until 2026-01-09 10:58:42.611620886 +0000 UTC m=+768.061525667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-6dwzn" (UID: "9721a7da-2c8a-4a0d-ac56-8b4b11c028cd") : secret "plugin-serving-cert" not found Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.111579 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.120050 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.139555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blsng\" (UniqueName: \"kubernetes.io/projected/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-kube-api-access-blsng\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.178981 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64db668f99-2zfcx"] Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.181790 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.182018 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.210087 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64db668f99-2zfcx"] Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214127 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-trusted-ca-bundle\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214200 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72tsv\" (UniqueName: \"kubernetes.io/projected/fb2c8fec-8292-49e4-967f-ac24fe73971b-kube-api-access-72tsv\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214309 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-oauth-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214334 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-service-ca\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214368 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-oauth-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: W0109 10:58:42.234951 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod673fefde_8c1b_46fe_a88a_00b3fa962a3e.slice/crio-8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839 WatchSource:0}: Error finding container 8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839: Status 404 returned error can't find the container with id 8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839 Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-config\") pod \"console-64db668f99-2zfcx\" (UID: 
\"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-trusted-ca-bundle\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318786 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72tsv\" (UniqueName: \"kubernetes.io/projected/fb2c8fec-8292-49e4-967f-ac24fe73971b-kube-api-access-72tsv\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-oauth-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319242 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-service-ca\") pod \"console-64db668f99-2zfcx\" (UID: 
\"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-oauth-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.320253 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-oauth-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.321421 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-service-ca\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.323198 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-trusted-ca-bundle\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " 
pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.330574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.332275 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-oauth-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.346381 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72tsv\" (UniqueName: \"kubernetes.io/projected/fb2c8fec-8292-49e4-967f-ac24fe73971b-kube-api-access-72tsv\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.504213 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.523601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.534577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.625025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.629918 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.736079 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.747454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4757d" event={"ID":"673fefde-8c1b-46fe-a88a-00b3fa962a3e","Type":"ContainerStarted","Data":"8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839"} Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.883628 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.922608 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"] Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.962712 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64db668f99-2zfcx"] Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.485676 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"] Jan 09 10:58:43 crc kubenswrapper[4727]: W0109 10:58:43.503188 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b8d8f1f_d4d5_4716_818f_6f5bbf6a2dac.slice/crio-e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db WatchSource:0}: Error finding container e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db: Status 404 returned error can't find the container with id e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.756607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" event={"ID":"0683f840-0540-443e-8f9d-123b701acbd7","Type":"ContainerStarted","Data":"bc5b232de035f7830cbd1039e4b37013034cb3ea57a653dd083955da8a69096e"} Jan 09 
10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.757715 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"] Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.758575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64db668f99-2zfcx" event={"ID":"fb2c8fec-8292-49e4-967f-ac24fe73971b","Type":"ContainerStarted","Data":"98c3ef45797a650b0546861b0d1a903076f2352aa78d24e8ca67f2a3bbb45410"} Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.758615 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64db668f99-2zfcx" event={"ID":"fb2c8fec-8292-49e4-967f-ac24fe73971b","Type":"ContainerStarted","Data":"c3c31376968e59b6b95ea898756e26082245575dcb97b06007bdadd2d79eebb0"} Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.760304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" event={"ID":"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac","Type":"ContainerStarted","Data":"e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db"} Jan 09 10:58:43 crc kubenswrapper[4727]: W0109 10:58:43.762720 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9721a7da_2c8a_4a0d_ac56_8b4b11c028cd.slice/crio-2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9 WatchSource:0}: Error finding container 2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9: Status 404 returned error can't find the container with id 2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9 Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.781935 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64db668f99-2zfcx" podStartSLOduration=1.781900937 podStartE2EDuration="1.781900937s" podCreationTimestamp="2026-01-09 10:58:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:58:43.7796858 +0000 UTC m=+769.229590581" watchObservedRunningTime="2026-01-09 10:58:43.781900937 +0000 UTC m=+769.231805718" Jan 09 10:58:44 crc kubenswrapper[4727]: I0109 10:58:44.769560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" event={"ID":"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd","Type":"ContainerStarted","Data":"2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9"} Jan 09 10:58:45 crc kubenswrapper[4727]: I0109 10:58:45.995246 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.884380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" event={"ID":"0683f840-0540-443e-8f9d-123b701acbd7","Type":"ContainerStarted","Data":"629ae3ccbc0688940e7c4e521882edb3ef170568626e7791955c7debe0e89daf"} Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.887306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" event={"ID":"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac","Type":"ContainerStarted","Data":"36c025d7a98dd5a684fac187ba986b40ef54701d9c86e601c187476b41e3647e"} Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.887452 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.913137 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" podStartSLOduration=2.7806615949999998 podStartE2EDuration="6.913113732s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 
10:58:43.516803213 +0000 UTC m=+768.966707994" lastFinishedPulling="2026-01-09 10:58:47.64925533 +0000 UTC m=+773.099160131" observedRunningTime="2026-01-09 10:58:47.903823141 +0000 UTC m=+773.353727962" watchObservedRunningTime="2026-01-09 10:58:47.913113732 +0000 UTC m=+773.363018533" Jan 09 10:58:48 crc kubenswrapper[4727]: I0109 10:58:48.896970 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4757d" event={"ID":"673fefde-8c1b-46fe-a88a-00b3fa962a3e","Type":"ContainerStarted","Data":"f2c8e8daa9a45a5ead4a602cefae7bb4736e062dff1e9b891726842cf0403173"} Jan 09 10:58:48 crc kubenswrapper[4727]: I0109 10:58:48.897152 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:48 crc kubenswrapper[4727]: I0109 10:58:48.918495 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4757d" podStartSLOduration=2.528554147 podStartE2EDuration="7.918451711s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 10:58:42.237560581 +0000 UTC m=+767.687465362" lastFinishedPulling="2026-01-09 10:58:47.627458145 +0000 UTC m=+773.077362926" observedRunningTime="2026-01-09 10:58:48.916765536 +0000 UTC m=+774.366670347" watchObservedRunningTime="2026-01-09 10:58:48.918451711 +0000 UTC m=+774.368356492" Jan 09 10:58:50 crc kubenswrapper[4727]: I0109 10:58:50.022910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" event={"ID":"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd","Type":"ContainerStarted","Data":"9bcbfcbca46344634fa168dd8d5e003cfd46a50e16d3456b4087bd6626cb9232"} Jan 09 10:58:50 crc kubenswrapper[4727]: I0109 10:58:50.055682 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" podStartSLOduration=4.030069872 
podStartE2EDuration="9.055649529s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 10:58:43.770796919 +0000 UTC m=+769.220701700" lastFinishedPulling="2026-01-09 10:58:48.796376576 +0000 UTC m=+774.246281357" observedRunningTime="2026-01-09 10:58:50.038427863 +0000 UTC m=+775.488332654" watchObservedRunningTime="2026-01-09 10:58:50.055649529 +0000 UTC m=+775.505554310" Jan 09 10:58:51 crc kubenswrapper[4727]: I0109 10:58:51.032151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" event={"ID":"0683f840-0540-443e-8f9d-123b701acbd7","Type":"ContainerStarted","Data":"ccc9a181f2c0c897bc98ffd67ee62069007e4214f085cbf72f7f1d64cd7cfb01"} Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.225238 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.269137 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" podStartSLOduration=4.00799404 podStartE2EDuration="11.269115115s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 10:58:42.930751876 +0000 UTC m=+768.380656667" lastFinishedPulling="2026-01-09 10:58:50.191872961 +0000 UTC m=+775.641777742" observedRunningTime="2026-01-09 10:58:51.065139926 +0000 UTC m=+776.515044727" watchObservedRunningTime="2026-01-09 10:58:52.269115115 +0000 UTC m=+777.719019896" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.505204 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.505265 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.512088 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:53 crc kubenswrapper[4727]: I0109 10:58:53.048402 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:53 crc kubenswrapper[4727]: I0109 10:58:53.112758 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:59:02 crc kubenswrapper[4727]: I0109 10:59:02.743019 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.405466 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.406124 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.406172 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.406965 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.407021 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3" gracePeriod=600 Jan 09 10:59:10 crc kubenswrapper[4727]: I0109 10:59:10.163353 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3" exitCode=0 Jan 09 10:59:10 crc kubenswrapper[4727]: I0109 10:59:10.163433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3"} Jan 09 10:59:10 crc kubenswrapper[4727]: I0109 10:59:10.163876 4727 scope.go:117] "RemoveContainer" containerID="fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31" Jan 09 10:59:11 crc kubenswrapper[4727]: I0109 10:59:11.173492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639"} Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.718790 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4"] Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.720836 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.724064 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.736926 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4"] Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.835617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.835739 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.835801 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: 
I0109 10:59:17.937907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.937996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.938089 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.939428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.939475 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.962702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.061742 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.172960 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-pjc7c" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" containerID="cri-o://3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" gracePeriod=15 Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.654452 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4"] Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.685401 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pjc7c_bab7ad75-cb15-4910-a013-e9cafba90f73/console/0.log" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.685496 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778211 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778272 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778500 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.779669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.779694 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config" (OuterVolumeSpecName: "console-config") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.779995 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.780023 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca" (OuterVolumeSpecName: "service-ca") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.785823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.785868 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r" (OuterVolumeSpecName: "kube-api-access-4gr6r") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "kube-api-access-4gr6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.787410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884363 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884407 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884422 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884435 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884445 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884458 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884468 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:19 crc 
kubenswrapper[4727]: I0109 10:59:19.230795 4727 generic.go:334] "Generic (PLEG): container finished" podID="af495843-7098-4ea5-9898-8a19dd9a0197" containerID="e1d03b69e93c7555701bb7210d9ea40ac4a6412d17bbb511efe9fc4f2222a8c6" exitCode=0 Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.230915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"e1d03b69e93c7555701bb7210d9ea40ac4a6412d17bbb511efe9fc4f2222a8c6"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.231014 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerStarted","Data":"59750bc7e55638f0b31208b2c7caeea05113198df2bedbc3bfe81ca123c0fefd"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.232942 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pjc7c_bab7ad75-cb15-4910-a013-e9cafba90f73/console/0.log" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.232997 4727 generic.go:334] "Generic (PLEG): container finished" podID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" exitCode=2 Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerDied","Data":"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233050 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" 
event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerDied","Data":"929125b8b64331d2d6d391ab423a97e682d7d12d88e3ecc772238a6afa971136"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233072 4727 scope.go:117] "RemoveContainer" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233146 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.269305 4727 scope.go:117] "RemoveContainer" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" Jan 09 10:59:19 crc kubenswrapper[4727]: E0109 10:59:19.270536 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87\": container with ID starting with 3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87 not found: ID does not exist" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.270588 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87"} err="failed to get container status \"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87\": rpc error: code = NotFound desc = could not find container \"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87\": container with ID starting with 3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87 not found: ID does not exist" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.288964 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.295980 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.066399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:20 crc kubenswrapper[4727]: E0109 10:59:20.068169 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.068254 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.068422 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.069502 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.087110 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.101802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.101875 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " 
pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.102163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.203220 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.203283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.203339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.204180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " 
pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.204960 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.226153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.401200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.721413 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.869837 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" path="/var/lib/kubelet/pods/bab7ad75-cb15-4910-a013-e9cafba90f73/volumes" Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.267534 4727 generic.go:334] "Generic (PLEG): container finished" podID="af495843-7098-4ea5-9898-8a19dd9a0197" containerID="068d57e544a7f765940c2e31941f158bd5738a97c4e9fe4480c33141ea5d005e" exitCode=0 Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.267618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" 
event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"068d57e544a7f765940c2e31941f158bd5738a97c4e9fe4480c33141ea5d005e"} Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.272401 4727 generic.go:334] "Generic (PLEG): container finished" podID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerID="c6a09b2f72e99eb084d1a66aebd5266476f26ab76440561b2f90c12cb0e7d8e3" exitCode=0 Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.273251 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"c6a09b2f72e99eb084d1a66aebd5266476f26ab76440561b2f90c12cb0e7d8e3"} Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.273309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerStarted","Data":"a7cc6e803dcd8ccf5e35b8cab1998e2b0c7b415f7a7b149ea44d0f438b8a28f0"} Jan 09 10:59:22 crc kubenswrapper[4727]: I0109 10:59:22.285818 4727 generic.go:334] "Generic (PLEG): container finished" podID="af495843-7098-4ea5-9898-8a19dd9a0197" containerID="aacfd1be3752e14fef9f75b5b32ba897ad74216e0490cef7e76a1aeefd5da5cc" exitCode=0 Jan 09 10:59:22 crc kubenswrapper[4727]: I0109 10:59:22.285895 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"aacfd1be3752e14fef9f75b5b32ba897ad74216e0490cef7e76a1aeefd5da5cc"} Jan 09 10:59:22 crc kubenswrapper[4727]: I0109 10:59:22.290595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerStarted","Data":"3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb"} Jan 09 10:59:23 crc 
kubenswrapper[4727]: I0109 10:59:23.747172 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.897503 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"af495843-7098-4ea5-9898-8a19dd9a0197\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.897608 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"af495843-7098-4ea5-9898-8a19dd9a0197\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.897802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"af495843-7098-4ea5-9898-8a19dd9a0197\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.899831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle" (OuterVolumeSpecName: "bundle") pod "af495843-7098-4ea5-9898-8a19dd9a0197" (UID: "af495843-7098-4ea5-9898-8a19dd9a0197"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.904728 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h" (OuterVolumeSpecName: "kube-api-access-nxx6h") pod "af495843-7098-4ea5-9898-8a19dd9a0197" (UID: "af495843-7098-4ea5-9898-8a19dd9a0197"). InnerVolumeSpecName "kube-api-access-nxx6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.913687 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util" (OuterVolumeSpecName: "util") pod "af495843-7098-4ea5-9898-8a19dd9a0197" (UID: "af495843-7098-4ea5-9898-8a19dd9a0197"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.999577 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.999616 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.999630 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.309112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" 
event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"59750bc7e55638f0b31208b2c7caeea05113198df2bedbc3bfe81ca123c0fefd"} Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.309716 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59750bc7e55638f0b31208b2c7caeea05113198df2bedbc3bfe81ca123c0fefd" Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.309136 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.311911 4727 generic.go:334] "Generic (PLEG): container finished" podID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerID="3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb" exitCode=0 Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.311978 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb"} Jan 09 10:59:25 crc kubenswrapper[4727]: I0109 10:59:25.322205 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerStarted","Data":"f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022"} Jan 09 10:59:25 crc kubenswrapper[4727]: I0109 10:59:25.344792 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bqw8" podStartSLOduration=1.671936512 podStartE2EDuration="5.344766974s" podCreationTimestamp="2026-01-09 10:59:20 +0000 UTC" firstStartedPulling="2026-01-09 10:59:21.276707343 +0000 UTC m=+806.726612124" lastFinishedPulling="2026-01-09 10:59:24.949537805 +0000 UTC m=+810.399442586" observedRunningTime="2026-01-09 
10:59:25.341290772 +0000 UTC m=+810.791195573" watchObservedRunningTime="2026-01-09 10:59:25.344766974 +0000 UTC m=+810.794671765" Jan 09 10:59:30 crc kubenswrapper[4727]: I0109 10:59:30.402369 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:30 crc kubenswrapper[4727]: I0109 10:59:30.402771 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:31 crc kubenswrapper[4727]: I0109 10:59:31.444185 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bqw8" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" probeResult="failure" output=< Jan 09 10:59:31 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:59:31 crc kubenswrapper[4727]: > Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.092810 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228"] Jan 09 10:59:34 crc kubenswrapper[4727]: E0109 10:59:34.093658 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="util" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093678 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="util" Jan 09 10:59:34 crc kubenswrapper[4727]: E0109 10:59:34.093705 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="pull" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093714 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="pull" Jan 09 10:59:34 crc kubenswrapper[4727]: E0109 10:59:34.093731 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="extract" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093739 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="extract" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093877 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="extract" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.094566 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.096544 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.097113 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.097431 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fdlt9" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.098963 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.102668 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.116694 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228"] Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.141636 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9bb\" (UniqueName: 
\"kubernetes.io/projected/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-kube-api-access-xd9bb\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.141811 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-webhook-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.142015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-apiservice-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.244226 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-apiservice-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.244319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9bb\" (UniqueName: \"kubernetes.io/projected/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-kube-api-access-xd9bb\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: 
\"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.244360 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-webhook-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.255271 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-webhook-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.259817 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-apiservice-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.266916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9bb\" (UniqueName: \"kubernetes.io/projected/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-kube-api-access-xd9bb\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.332580 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz"] Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.333391 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.336066 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.336200 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.337107 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ctwcm" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.349762 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz"] Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.350615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-webhook-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.350692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmgn\" (UniqueName: \"kubernetes.io/projected/d3f738e6-a0bc-42cd-b4d8-71940837e09f-kube-api-access-psmgn\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.350794 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-apiservice-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.414286 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.451886 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-webhook-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.452238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psmgn\" (UniqueName: \"kubernetes.io/projected/d3f738e6-a0bc-42cd-b4d8-71940837e09f-kube-api-access-psmgn\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.452298 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-apiservice-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.460816 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-apiservice-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.464440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-webhook-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.477385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psmgn\" (UniqueName: \"kubernetes.io/projected/d3f738e6-a0bc-42cd-b4d8-71940837e09f-kube-api-access-psmgn\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.648759 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.759112 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228"] Jan 09 10:59:35 crc kubenswrapper[4727]: I0109 10:59:35.385664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" event={"ID":"d7eb33c1-26fc-47be-8c5b-f235afa77ea8","Type":"ContainerStarted","Data":"7f48e767fabdfa06cafbe5d850a392bb64a1f11f8d46a5008f86e739de73024a"} Jan 09 10:59:35 crc kubenswrapper[4727]: I0109 10:59:35.489223 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz"] Jan 09 10:59:35 crc kubenswrapper[4727]: W0109 10:59:35.496639 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3f738e6_a0bc_42cd_b4d8_71940837e09f.slice/crio-6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4 WatchSource:0}: Error finding container 6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4: Status 404 returned error can't find the container with id 6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4 Jan 09 10:59:36 crc kubenswrapper[4727]: I0109 10:59:36.399056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" event={"ID":"d3f738e6-a0bc-42cd-b4d8-71940837e09f","Type":"ContainerStarted","Data":"6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4"} Jan 09 10:59:40 crc kubenswrapper[4727]: I0109 10:59:40.502643 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:40 crc kubenswrapper[4727]: I0109 10:59:40.556277 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:40 crc kubenswrapper[4727]: I0109 10:59:40.734352 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:42 crc kubenswrapper[4727]: I0109 10:59:42.445035 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bqw8" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" containerID="cri-o://f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022" gracePeriod=2 Jan 09 10:59:43 crc kubenswrapper[4727]: I0109 10:59:43.454894 4727 generic.go:334] "Generic (PLEG): container finished" podID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerID="f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022" exitCode=0 Jan 09 10:59:43 crc kubenswrapper[4727]: I0109 10:59:43.454945 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022"} Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.744883 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.942759 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.942938 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.942983 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.944111 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities" (OuterVolumeSpecName: "utilities") pod "42a2f991-4bd0-4eba-84c9-e5020d40afd0" (UID: "42a2f991-4bd0-4eba-84c9-e5020d40afd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.949332 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878" (OuterVolumeSpecName: "kube-api-access-z7878") pod "42a2f991-4bd0-4eba-84c9-e5020d40afd0" (UID: "42a2f991-4bd0-4eba-84c9-e5020d40afd0"). InnerVolumeSpecName "kube-api-access-z7878". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.045075 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.045579 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.058802 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42a2f991-4bd0-4eba-84c9-e5020d40afd0" (UID: "42a2f991-4bd0-4eba-84c9-e5020d40afd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.155646 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.478847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"a7cc6e803dcd8ccf5e35b8cab1998e2b0c7b415f7a7b149ea44d0f438b8a28f0"} Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.478926 4727 scope.go:117] "RemoveContainer" containerID="f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.479560 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.480974 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" event={"ID":"d3f738e6-a0bc-42cd-b4d8-71940837e09f","Type":"ContainerStarted","Data":"496ac14fde94ebfa73edd4f4f740ba85472ce45fa93725992cbad4c2b32d953c"} Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.481167 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.484308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" event={"ID":"d7eb33c1-26fc-47be-8c5b-f235afa77ea8","Type":"ContainerStarted","Data":"1d86bebf950d90185802e82ee5f4f149cdf9b7c09897138859c5e53f5330d4a8"} Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.484589 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.500814 4727 scope.go:117] "RemoveContainer" containerID="3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.509863 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" podStartSLOduration=2.647153178 podStartE2EDuration="12.509844305s" podCreationTimestamp="2026-01-09 10:59:34 +0000 UTC" firstStartedPulling="2026-01-09 10:59:35.500132894 +0000 UTC m=+820.950037675" lastFinishedPulling="2026-01-09 10:59:45.362824021 +0000 UTC m=+830.812728802" observedRunningTime="2026-01-09 10:59:46.508663123 +0000 UTC m=+831.958567914" watchObservedRunningTime="2026-01-09 10:59:46.509844305 +0000 UTC m=+831.959749086" Jan 
09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.520180 4727 scope.go:117] "RemoveContainer" containerID="c6a09b2f72e99eb084d1a66aebd5266476f26ab76440561b2f90c12cb0e7d8e3" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.551955 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" podStartSLOduration=2.031107409 podStartE2EDuration="12.55193217s" podCreationTimestamp="2026-01-09 10:59:34 +0000 UTC" firstStartedPulling="2026-01-09 10:59:34.815494088 +0000 UTC m=+820.265398869" lastFinishedPulling="2026-01-09 10:59:45.336318849 +0000 UTC m=+830.786223630" observedRunningTime="2026-01-09 10:59:46.549465414 +0000 UTC m=+831.999370195" watchObservedRunningTime="2026-01-09 10:59:46.55193217 +0000 UTC m=+832.001836951" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.574351 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.577646 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.869212 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" path="/var/lib/kubelet/pods/42a2f991-4bd0-4eba-84c9-e5020d40afd0/volumes" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.167014 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:00:00 crc kubenswrapper[4727]: E0109 11:00:00.168086 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168105 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" 
containerName="registry-server" Jan 09 11:00:00 crc kubenswrapper[4727]: E0109 11:00:00.168123 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-utilities" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168131 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-utilities" Jan 09 11:00:00 crc kubenswrapper[4727]: E0109 11:00:00.168153 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-content" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168160 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-content" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168278 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168852 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.171618 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.172266 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.180544 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.272320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.272432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.272780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.374741 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.374822 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.374898 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.376116 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.384010 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.396262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.535436 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.766028 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:00:01 crc kubenswrapper[4727]: I0109 11:00:01.596379 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerID="b65ad815096d70648fb353956b9ad150a228f000450b80449e7948a4c212e007" exitCode=0 Jan 09 11:00:01 crc kubenswrapper[4727]: I0109 11:00:01.596433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" event={"ID":"f4efe522-b8d6-44a6-a75b-7cb19f528323","Type":"ContainerDied","Data":"b65ad815096d70648fb353956b9ad150a228f000450b80449e7948a4c212e007"} Jan 09 11:00:01 crc kubenswrapper[4727]: I0109 11:00:01.596802 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" 
event={"ID":"f4efe522-b8d6-44a6-a75b-7cb19f528323","Type":"ContainerStarted","Data":"5fe88049c8f2c821430aca9ca2c095bd9ba8fbc6dae83b7bcb00ee8b9437fa34"} Jan 09 11:00:02 crc kubenswrapper[4727]: I0109 11:00:02.834929 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.009962 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"f4efe522-b8d6-44a6-a75b-7cb19f528323\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.010191 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"f4efe522-b8d6-44a6-a75b-7cb19f528323\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.010225 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"f4efe522-b8d6-44a6-a75b-7cb19f528323\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.011428 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4efe522-b8d6-44a6-a75b-7cb19f528323" (UID: "f4efe522-b8d6-44a6-a75b-7cb19f528323"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.017796 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4efe522-b8d6-44a6-a75b-7cb19f528323" (UID: "f4efe522-b8d6-44a6-a75b-7cb19f528323"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.018395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4" (OuterVolumeSpecName: "kube-api-access-lvdh4") pod "f4efe522-b8d6-44a6-a75b-7cb19f528323" (UID: "f4efe522-b8d6-44a6-a75b-7cb19f528323"). InnerVolumeSpecName "kube-api-access-lvdh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.112111 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.112180 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.112202 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.614176 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.614092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" event={"ID":"f4efe522-b8d6-44a6-a75b-7cb19f528323","Type":"ContainerDied","Data":"5fe88049c8f2c821430aca9ca2c095bd9ba8fbc6dae83b7bcb00ee8b9437fa34"} Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.614356 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe88049c8f2c821430aca9ca2c095bd9ba8fbc6dae83b7bcb00ee8b9437fa34" Jan 09 11:00:04 crc kubenswrapper[4727]: I0109 11:00:04.654655 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 11:00:24 crc kubenswrapper[4727]: I0109 11:00:24.420603 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.250925 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xvvzt"] Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.251327 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerName="collect-profiles" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.251351 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerName="collect-profiles" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.251482 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerName="collect-profiles" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.253710 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.256164 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.257429 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.258124 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lvktz" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.260025 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.262226 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.263539 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.277205 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-startup\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-reloader\") pod \"frr-k8s-xvvzt\" (UID: 
\"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281716 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkb7s\" (UniqueName: \"kubernetes.io/projected/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-kube-api-access-qkb7s\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-sockets\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281939 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics-certs\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.282023 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics\") pod \"frr-k8s-xvvzt\" (UID: 
\"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.282058 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9mt\" (UniqueName: \"kubernetes.io/projected/e9d515de-9700-4c41-97f0-317214f0a7bb-kube-api-access-2j9mt\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.282090 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-conf\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.350829 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ls2r2"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.351737 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.354939 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.354948 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.355174 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gmz4p" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.356114 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.365671 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-ljds2"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.368073 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.370253 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkb7s\" (UniqueName: \"kubernetes.io/projected/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-kube-api-access-qkb7s\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-sockets\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383442 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb75e8-9dff-48d1-952b-a07637adfceb-metallb-excludel2\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics-certs\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383491 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb84t\" (UniqueName: 
\"kubernetes.io/projected/8ffb75e8-9dff-48d1-952b-a07637adfceb-kube-api-access-rb84t\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9mt\" (UniqueName: \"kubernetes.io/projected/e9d515de-9700-4c41-97f0-317214f0a7bb-kube-api-access-2j9mt\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383660 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-conf\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-startup\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " 
pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-reloader\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-cert\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384019 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-sockets\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384087 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9b7z\" (UniqueName: \"kubernetes.io/projected/da86c323-c171-499f-8e25-74532f7c1fca-kube-api-access-n9b7z\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384148 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-conf\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384148 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.384259 4727 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.384316 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert podName:ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.88429719 +0000 UTC m=+871.334201971 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert") pod "frr-k8s-webhook-server-7784b6fcf-6msbv" (UID: "ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee") : secret "frr-k8s-webhook-server-cert" not found Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384316 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-reloader\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-startup\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.390586 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-ljds2"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.407238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics-certs\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.418406 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkb7s\" (UniqueName: \"kubernetes.io/projected/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-kube-api-access-qkb7s\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.445653 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2j9mt\" (UniqueName: \"kubernetes.io/projected/e9d515de-9700-4c41-97f0-317214f0a7bb-kube-api-access-2j9mt\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485301 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-cert\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485436 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9b7z\" (UniqueName: \"kubernetes.io/projected/da86c323-c171-499f-8e25-74532f7c1fca-kube-api-access-n9b7z\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/8ffb75e8-9dff-48d1-952b-a07637adfceb-metallb-excludel2\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb84t\" (UniqueName: \"kubernetes.io/projected/8ffb75e8-9dff-48d1-952b-a07637adfceb-kube-api-access-rb84t\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485569 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485595 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485660 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist podName:8ffb75e8-9dff-48d1-952b-a07637adfceb nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.985635511 +0000 UTC m=+871.435540292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist") pod "speaker-ls2r2" (UID: "8ffb75e8-9dff-48d1-952b-a07637adfceb") : secret "metallb-memberlist" not found Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485733 4727 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485732 4727 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485806 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs podName:8ffb75e8-9dff-48d1-952b-a07637adfceb nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.985782535 +0000 UTC m=+871.435687316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs") pod "speaker-ls2r2" (UID: "8ffb75e8-9dff-48d1-952b-a07637adfceb") : secret "speaker-certs-secret" not found Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485826 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs podName:da86c323-c171-499f-8e25-74532f7c1fca nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.985815976 +0000 UTC m=+871.435720757 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs") pod "controller-5bddd4b946-ljds2" (UID: "da86c323-c171-499f-8e25-74532f7c1fca") : secret "controller-certs-secret" not found Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.486449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb75e8-9dff-48d1-952b-a07637adfceb-metallb-excludel2\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.489803 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.516237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-cert\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.520571 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9b7z\" (UniqueName: \"kubernetes.io/projected/da86c323-c171-499f-8e25-74532f7c1fca-kube-api-access-n9b7z\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.534360 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb84t\" (UniqueName: \"kubernetes.io/projected/8ffb75e8-9dff-48d1-952b-a07637adfceb-kube-api-access-rb84t\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.584650 
4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.802293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"efd4c49e809e6c226292509c4ab0ab548700c565e167915c92891158d9076137"} Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.891879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.898430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.997686 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.997840 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.997878 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.998980 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.999088 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist podName:8ffb75e8-9dff-48d1-952b-a07637adfceb nodeName:}" failed. No retries permitted until 2026-01-09 11:00:26.999045889 +0000 UTC m=+872.448950680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist") pod "speaker-ls2r2" (UID: "8ffb75e8-9dff-48d1-952b-a07637adfceb") : secret "metallb-memberlist" not found Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.001647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.001858 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.197320 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.287583 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.496002 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-ljds2"] Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.654590 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"] Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811371 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-ljds2" event={"ID":"da86c323-c171-499f-8e25-74532f7c1fca","Type":"ContainerStarted","Data":"67f5223dee7e4ce371ce2b4f2734ebcec5a20080c1da731465e5444d898377c7"} Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811426 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-ljds2" event={"ID":"da86c323-c171-499f-8e25-74532f7c1fca","Type":"ContainerStarted","Data":"bfd5b5a5bd0846086d6f863108057ac49770c8d9bd85e84acc04ffa17c7f9637"} Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-ljds2" event={"ID":"da86c323-c171-499f-8e25-74532f7c1fca","Type":"ContainerStarted","Data":"83766bc7e4255f41827f3779589be93806699bdb1962a65bab36119f0b5e8ec2"} Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811571 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.817401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" 
event={"ID":"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee","Type":"ContainerStarted","Data":"8bd8725a9cb6287049f726a46d371b42b138b6fd40b2a1a5e4ced8cd2a7fa877"} Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.834737 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-ljds2" podStartSLOduration=1.834706213 podStartE2EDuration="1.834706213s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:00:26.828992379 +0000 UTC m=+872.278897170" watchObservedRunningTime="2026-01-09 11:00:26.834706213 +0000 UTC m=+872.284611034" Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.014694 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.023488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2" Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.169890 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ls2r2" Jan 09 11:00:27 crc kubenswrapper[4727]: W0109 11:00:27.206962 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ffb75e8_9dff_48d1_952b_a07637adfceb.slice/crio-92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55 WatchSource:0}: Error finding container 92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55: Status 404 returned error can't find the container with id 92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55 Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834421 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ls2r2" event={"ID":"8ffb75e8-9dff-48d1-952b-a07637adfceb","Type":"ContainerStarted","Data":"a2184598e0014888261a5ae6fffb04fa04a45f0f9ff1fa8a2fef4373e7d5b9ad"} Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ls2r2" event={"ID":"8ffb75e8-9dff-48d1-952b-a07637adfceb","Type":"ContainerStarted","Data":"af295ea6cbca9efecfee2835c5c02cb6feabccb976cc7af0cf98636fa5f0298f"} Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ls2r2" event={"ID":"8ffb75e8-9dff-48d1-952b-a07637adfceb","Type":"ContainerStarted","Data":"92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55"} Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834992 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ls2r2" Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.872266 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ls2r2" podStartSLOduration=2.8722486480000002 podStartE2EDuration="2.872248648s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:00:27.867459878 +0000 UTC m=+873.317364659" watchObservedRunningTime="2026-01-09 11:00:27.872248648 +0000 UTC m=+873.322153429" Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.887030 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" event={"ID":"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee","Type":"ContainerStarted","Data":"114603ab77cc97784a303c62f913ae683a3cdfd182ea0890b943084f6ebeec0a"} Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.887723 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.894218 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9d515de-9700-4c41-97f0-317214f0a7bb" containerID="336cfe712fca2d662d2ed36f9c76bb33ff8d3f0bddb65fcea4e9b55d1bea319b" exitCode=0 Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.895702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerDied","Data":"336cfe712fca2d662d2ed36f9c76bb33ff8d3f0bddb65fcea4e9b55d1bea319b"} Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.916311 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" podStartSLOduration=2.571172402 podStartE2EDuration="9.916285345s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="2026-01-09 11:00:26.658829982 +0000 UTC m=+872.108734763" lastFinishedPulling="2026-01-09 11:00:34.003942925 +0000 UTC m=+879.453847706" observedRunningTime="2026-01-09 11:00:34.916263855 +0000 UTC m=+880.366168656" watchObservedRunningTime="2026-01-09 11:00:34.916285345 +0000 UTC m=+880.366190126" Jan 09 11:00:35 
crc kubenswrapper[4727]: I0109 11:00:35.903076 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9d515de-9700-4c41-97f0-317214f0a7bb" containerID="7d31d2b0dcba99d43aeb055586eb04e04a9fe8526a5e31dd79fd2c79733bb673" exitCode=0 Jan 09 11:00:35 crc kubenswrapper[4727]: I0109 11:00:35.903205 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerDied","Data":"7d31d2b0dcba99d43aeb055586eb04e04a9fe8526a5e31dd79fd2c79733bb673"} Jan 09 11:00:36 crc kubenswrapper[4727]: I0109 11:00:36.291608 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-ljds2" Jan 09 11:00:36 crc kubenswrapper[4727]: I0109 11:00:36.911705 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9d515de-9700-4c41-97f0-317214f0a7bb" containerID="bdc6946f96e0e7dcf5dbb1e4a5c63b96a5a0e85b3e34acd19aef89c6aaf0797f" exitCode=0 Jan 09 11:00:36 crc kubenswrapper[4727]: I0109 11:00:36.911752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerDied","Data":"bdc6946f96e0e7dcf5dbb1e4a5c63b96a5a0e85b3e34acd19aef89c6aaf0797f"} Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.173906 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ls2r2" Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"e987960defdc214ea336ca385c3cdae993f275154fc21c88469e51440955ef47"} Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932960 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" 
event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"154298782c72eff5b329379a1df10c8f873e06e3df6e3cfd41c4e099ce2dcfaf"} Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"21abf60c1c460debbfd2b2e62734c491f1703f84fc29fd284b01fc041c120fff"} Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932990 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"92927aefb40e1e389ba422fd9a2856038683fc577889db6b247a0cacda12dbd9"} Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.933002 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"31ed0c4e67371ff5f51eee6194b44961ad944c25322548eb4d4668475a3881e2"} Jan 09 11:00:38 crc kubenswrapper[4727]: I0109 11:00:38.960378 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"8e02e46b8c7432be7961a22e210442f72753127391562bc40579f00699015f7b"} Jan 09 11:00:38 crc kubenswrapper[4727]: I0109 11:00:38.961623 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.480569 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xvvzt" podStartSLOduration=7.267998937 podStartE2EDuration="15.480494748s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="2026-01-09 11:00:25.775227466 +0000 UTC m=+871.225132247" lastFinishedPulling="2026-01-09 11:00:33.987723257 +0000 UTC m=+879.437628058" 
observedRunningTime="2026-01-09 11:00:38.989162762 +0000 UTC m=+884.439067553" watchObservedRunningTime="2026-01-09 11:00:40.480494748 +0000 UTC m=+885.930399589" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.488468 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.489577 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.492517 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.495929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7c9d7" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.496933 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.499786 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.536787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"openstack-operator-index-8pfvp\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.585590 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.638483 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"openstack-operator-index-8pfvp\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.642383 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.666033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"openstack-operator-index-8pfvp\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.855428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:41 crc kubenswrapper[4727]: I0109 11:00:41.092492 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:41 crc kubenswrapper[4727]: I0109 11:00:41.982791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerStarted","Data":"feca25ae368a63dde2a5507266735a1dc7c994fd1e913e4707c207605e844510"} Jan 09 11:00:43 crc kubenswrapper[4727]: I0109 11:00:43.850469 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:43 crc kubenswrapper[4727]: I0109 11:00:43.999799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" 
event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerStarted","Data":"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc"} Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.027212 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8pfvp" podStartSLOduration=1.898077695 podStartE2EDuration="4.027185281s" podCreationTimestamp="2026-01-09 11:00:40 +0000 UTC" firstStartedPulling="2026-01-09 11:00:41.104132086 +0000 UTC m=+886.554036857" lastFinishedPulling="2026-01-09 11:00:43.233239662 +0000 UTC m=+888.683144443" observedRunningTime="2026-01-09 11:00:44.020374258 +0000 UTC m=+889.470279049" watchObservedRunningTime="2026-01-09 11:00:44.027185281 +0000 UTC m=+889.477090072" Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.458120 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cj5kr"] Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.460052 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.469320 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cj5kr"] Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.502257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229km\" (UniqueName: \"kubernetes.io/projected/26bfbd30-40a2-466a-862d-6cdf25911f85-kube-api-access-229km\") pod \"openstack-operator-index-cj5kr\" (UID: \"26bfbd30-40a2-466a-862d-6cdf25911f85\") " pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.603614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229km\" (UniqueName: \"kubernetes.io/projected/26bfbd30-40a2-466a-862d-6cdf25911f85-kube-api-access-229km\") pod \"openstack-operator-index-cj5kr\" (UID: \"26bfbd30-40a2-466a-862d-6cdf25911f85\") " pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.630938 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229km\" (UniqueName: \"kubernetes.io/projected/26bfbd30-40a2-466a-862d-6cdf25911f85-kube-api-access-229km\") pod \"openstack-operator-index-cj5kr\" (UID: \"26bfbd30-40a2-466a-862d-6cdf25911f85\") " pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.824282 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.009386 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8pfvp" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" containerID="cri-o://316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" gracePeriod=2 Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.096133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cj5kr"] Jan 09 11:00:45 crc kubenswrapper[4727]: W0109 11:00:45.109991 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26bfbd30_40a2_466a_862d_6cdf25911f85.slice/crio-c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860 WatchSource:0}: Error finding container c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860: Status 404 returned error can't find the container with id c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860 Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.331000 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.412591 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.419049 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx" (OuterVolumeSpecName: "kube-api-access-992xx") pod "6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" (UID: "6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2"). InnerVolumeSpecName "kube-api-access-992xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.514357 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.018568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cj5kr" event={"ID":"26bfbd30-40a2-466a-862d-6cdf25911f85","Type":"ContainerStarted","Data":"37dd237b8519cd9fa72e3ff3fe52e570212af7f2614e8cd49820095b682e3f8a"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.019001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cj5kr" event={"ID":"26bfbd30-40a2-466a-862d-6cdf25911f85","Type":"ContainerStarted","Data":"c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.019968 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" exitCode=0 Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020003 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerDied","Data":"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerDied","Data":"feca25ae368a63dde2a5507266735a1dc7c994fd1e913e4707c207605e844510"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020037 4727 scope.go:117] "RemoveContainer" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020138 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.045667 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cj5kr" podStartSLOduration=1.987715713 podStartE2EDuration="2.045642594s" podCreationTimestamp="2026-01-09 11:00:44 +0000 UTC" firstStartedPulling="2026-01-09 11:00:45.114833947 +0000 UTC m=+890.564738728" lastFinishedPulling="2026-01-09 11:00:45.172760828 +0000 UTC m=+890.622665609" observedRunningTime="2026-01-09 11:00:46.040435324 +0000 UTC m=+891.490340135" watchObservedRunningTime="2026-01-09 11:00:46.045642594 +0000 UTC m=+891.495547415" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.052254 4727 scope.go:117] "RemoveContainer" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" Jan 09 11:00:46 crc kubenswrapper[4727]: E0109 11:00:46.053206 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc\": container with ID starting with 316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc not found: ID does not exist" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.053266 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc"} err="failed to get container status \"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc\": rpc error: code = NotFound desc = could not find container \"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc\": container with ID starting with 316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc not found: ID does not exist" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 
11:00:46.077820 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.083383 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.205502 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.912334 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" path="/var/lib/kubelet/pods/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2/volumes" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.669813 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:00:51 crc kubenswrapper[4727]: E0109 11:00:51.671314 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.671434 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.671794 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.673765 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.687162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.711276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.711599 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.711735 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813123 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813716 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.847564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:52 crc kubenswrapper[4727]: I0109 11:00:52.003097 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:52 crc kubenswrapper[4727]: I0109 11:00:52.483974 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:00:52 crc kubenswrapper[4727]: W0109 11:00:52.493750 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8603c28d_35ff_45d9_a606_1ebc68271a2c.slice/crio-7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b WatchSource:0}: Error finding container 7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b: Status 404 returned error can't find the container with id 7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b Jan 09 11:00:53 crc kubenswrapper[4727]: I0109 11:00:53.077786 4727 generic.go:334] "Generic (PLEG): container finished" podID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerID="504edafdc06587cedd1404b889f20e4ee1038b1c7d57904249507ee37b13d657" exitCode=0 Jan 09 11:00:53 crc kubenswrapper[4727]: I0109 11:00:53.078111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"504edafdc06587cedd1404b889f20e4ee1038b1c7d57904249507ee37b13d657"} Jan 09 11:00:53 crc kubenswrapper[4727]: I0109 11:00:53.078133 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerStarted","Data":"7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b"} Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.090719 4727 generic.go:334] "Generic (PLEG): container finished" podID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerID="ff22aa3eacf371747748cec36311e35c2c5ebb77ee5b07f9cd43b5e1f320411e" exitCode=0 Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 
11:00:54.090810 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"ff22aa3eacf371747748cec36311e35c2c5ebb77ee5b07f9cd43b5e1f320411e"} Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.824804 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.825310 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.875014 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.100182 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerStarted","Data":"f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007"} Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.123449 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gfmgm" podStartSLOduration=2.471561655 podStartE2EDuration="4.123420818s" podCreationTimestamp="2026-01-09 11:00:51 +0000 UTC" firstStartedPulling="2026-01-09 11:00:53.080616358 +0000 UTC m=+898.530521139" lastFinishedPulling="2026-01-09 11:00:54.732475521 +0000 UTC m=+900.182380302" observedRunningTime="2026-01-09 11:00:55.121173957 +0000 UTC m=+900.571078748" watchObservedRunningTime="2026-01-09 11:00:55.123420818 +0000 UTC m=+900.573325599" Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.133292 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cj5kr" 
Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.587783 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.088814 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"] Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.090869 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.093443 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6c6hf" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.099670 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"] Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.197121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.197193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.197318 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.298978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299286 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299801 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.324719 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.417613 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.700361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"] Jan 09 11:00:58 crc kubenswrapper[4727]: I0109 11:00:58.124607 4727 generic.go:334] "Generic (PLEG): container finished" podID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerID="6fa068476cee2339b8cf13515c203667c382610a59e33b81d3a4d3d3a0a10e1d" exitCode=0 Jan 09 11:00:58 crc kubenswrapper[4727]: I0109 11:00:58.124654 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"6fa068476cee2339b8cf13515c203667c382610a59e33b81d3a4d3d3a0a10e1d"} Jan 09 11:00:58 crc kubenswrapper[4727]: I0109 11:00:58.124682 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerStarted","Data":"65c603da891a75683a11d72a5c18f4a1e62955299536678ba847e5ae68334ccc"} Jan 09 11:00:59 crc kubenswrapper[4727]: I0109 11:00:59.136394 4727 generic.go:334] "Generic (PLEG): container finished" podID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerID="2e5abe507d1bbb2278e130fe556066b3ca098731f09074008cfa2cb6203d1837" exitCode=0 Jan 09 11:00:59 crc kubenswrapper[4727]: I0109 11:00:59.136473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"2e5abe507d1bbb2278e130fe556066b3ca098731f09074008cfa2cb6203d1837"} Jan 09 11:01:00 crc kubenswrapper[4727]: I0109 11:01:00.150101 4727 generic.go:334] 
"Generic (PLEG): container finished" podID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerID="fc411ebf90e64855159414d6ca76004d86cccdb5e3fc47200985a19b18737320" exitCode=0 Jan 09 11:01:00 crc kubenswrapper[4727]: I0109 11:01:00.150161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"fc411ebf90e64855159414d6ca76004d86cccdb5e3fc47200985a19b18737320"} Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.449077 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.562378 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"7624e855-2440-4a5a-8905-5e4e7c76a36c\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.562482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"7624e855-2440-4a5a-8905-5e4e7c76a36c\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.562654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"7624e855-2440-4a5a-8905-5e4e7c76a36c\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.563476 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle" (OuterVolumeSpecName: "bundle") pod "7624e855-2440-4a5a-8905-5e4e7c76a36c" (UID: "7624e855-2440-4a5a-8905-5e4e7c76a36c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.573094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2" (OuterVolumeSpecName: "kube-api-access-p62x2") pod "7624e855-2440-4a5a-8905-5e4e7c76a36c" (UID: "7624e855-2440-4a5a-8905-5e4e7c76a36c"). InnerVolumeSpecName "kube-api-access-p62x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.585348 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util" (OuterVolumeSpecName: "util") pod "7624e855-2440-4a5a-8905-5e4e7c76a36c" (UID: "7624e855-2440-4a5a-8905-5e4e7c76a36c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.665171 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.665361 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.665374 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.004747 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.005333 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.071243 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.166414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"65c603da891a75683a11d72a5c18f4a1e62955299536678ba847e5ae68334ccc"} Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.166465 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c603da891a75683a11d72a5c18f4a1e62955299536678ba847e5ae68334ccc" Jan 09 11:01:02 crc 
kubenswrapper[4727]: I0109 11:01:02.166841 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.209021 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108519 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"] Jan 09 11:01:04 crc kubenswrapper[4727]: E0109 11:01:04.108781 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="extract" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108793 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="extract" Jan 09 11:01:04 crc kubenswrapper[4727]: E0109 11:01:04.108808 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="pull" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108813 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="pull" Jan 09 11:01:04 crc kubenswrapper[4727]: E0109 11:01:04.108828 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="util" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108834 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="util" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108950 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="extract" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.109362 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.112798 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-8bvpn" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.135964 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"] Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.311138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582qb\" (UniqueName: \"kubernetes.io/projected/f749f148-ae4b-475b-90d9-1028d134d57c-kube-api-access-582qb\") pod \"openstack-operator-controller-operator-75c59d454f-d829c\" (UID: \"f749f148-ae4b-475b-90d9-1028d134d57c\") " pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.412664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582qb\" (UniqueName: \"kubernetes.io/projected/f749f148-ae4b-475b-90d9-1028d134d57c-kube-api-access-582qb\") pod \"openstack-operator-controller-operator-75c59d454f-d829c\" (UID: \"f749f148-ae4b-475b-90d9-1028d134d57c\") " pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.433335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582qb\" (UniqueName: \"kubernetes.io/projected/f749f148-ae4b-475b-90d9-1028d134d57c-kube-api-access-582qb\") pod \"openstack-operator-controller-operator-75c59d454f-d829c\" (UID: \"f749f148-ae4b-475b-90d9-1028d134d57c\") " pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:04 crc 
kubenswrapper[4727]: I0109 11:01:04.448935 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.449755 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gfmgm" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server" containerID="cri-o://f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007" gracePeriod=2 Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.724633 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.186363 4727 generic.go:334] "Generic (PLEG): container finished" podID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerID="f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007" exitCode=0 Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.186466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007"} Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.223432 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"] Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.334801 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.528014 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"8603c28d-35ff-45d9-a606-1ebc68271a2c\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.528531 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"8603c28d-35ff-45d9-a606-1ebc68271a2c\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.528719 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"8603c28d-35ff-45d9-a606-1ebc68271a2c\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.529340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities" (OuterVolumeSpecName: "utilities") pod "8603c28d-35ff-45d9-a606-1ebc68271a2c" (UID: "8603c28d-35ff-45d9-a606-1ebc68271a2c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.530713 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.537440 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj" (OuterVolumeSpecName: "kube-api-access-nngwj") pod "8603c28d-35ff-45d9-a606-1ebc68271a2c" (UID: "8603c28d-35ff-45d9-a606-1ebc68271a2c"). InnerVolumeSpecName "kube-api-access-nngwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.571845 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8603c28d-35ff-45d9-a606-1ebc68271a2c" (UID: "8603c28d-35ff-45d9-a606-1ebc68271a2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.632410 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.632793 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.204811 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" event={"ID":"f749f148-ae4b-475b-90d9-1028d134d57c","Type":"ContainerStarted","Data":"56c1335067c352d5069c2953bf0d4764bec967227c1782558152733b21e0e6f8"} Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.208753 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b"} Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.208801 4727 scope.go:117] "RemoveContainer" containerID="f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007" Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.208840 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.239425 4727 scope.go:117] "RemoveContainer" containerID="ff22aa3eacf371747748cec36311e35c2c5ebb77ee5b07f9cd43b5e1f320411e" Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.255712 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.267611 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.295126 4727 scope.go:117] "RemoveContainer" containerID="504edafdc06587cedd1404b889f20e4ee1038b1c7d57904249507ee37b13d657" Jan 09 11:01:06 crc kubenswrapper[4727]: E0109 11:01:06.361430 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8603c28d_35ff_45d9_a606_1ebc68271a2c.slice\": RecentStats: unable to find data in memory cache]" Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.867876 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" path="/var/lib/kubelet/pods/8603c28d-35ff-45d9-a606-1ebc68271a2c/volumes" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.456223 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"] Jan 09 11:01:10 crc kubenswrapper[4727]: E0109 11:01:10.457301 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-content" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457388 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-content" Jan 09 11:01:10 crc kubenswrapper[4727]: E0109 11:01:10.457492 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457542 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server" Jan 09 11:01:10 crc kubenswrapper[4727]: E0109 11:01:10.457565 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-utilities" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457576 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-utilities" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457770 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.459080 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.467859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"] Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.507347 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.507749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.507799 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.608957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.609046 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.609076 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.609936 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.610151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.649702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.794026 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:11 crc kubenswrapper[4727]: I0109 11:01:11.757767 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"] Jan 09 11:01:11 crc kubenswrapper[4727]: W0109 11:01:11.776075 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded95bba5_81db_4552_bb87_8197e56d1164.slice/crio-79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839 WatchSource:0}: Error finding container 79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839: Status 404 returned error can't find the container with id 79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839 Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.260309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" event={"ID":"f749f148-ae4b-475b-90d9-1028d134d57c","Type":"ContainerStarted","Data":"326423a7fa179eda3d4fe6c5fc6ed654a41b92c845e7d9d963d6226d2f0d20a7"} Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.261598 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.264193 4727 generic.go:334] "Generic (PLEG): container finished" podID="ed95bba5-81db-4552-bb87-8197e56d1164" containerID="279293e70d6049a7de0d4b5a0a88deb976d49f0ec5630012b133934b3b97e2ff" exitCode=0 Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.264232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"279293e70d6049a7de0d4b5a0a88deb976d49f0ec5630012b133934b3b97e2ff"} Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.264254 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerStarted","Data":"79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839"} Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.314395 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" podStartSLOduration=1.9694940920000001 podStartE2EDuration="8.314360381s" podCreationTimestamp="2026-01-09 11:01:04 +0000 UTC" firstStartedPulling="2026-01-09 11:01:05.240623731 +0000 UTC m=+910.690528512" lastFinishedPulling="2026-01-09 11:01:11.58549001 +0000 UTC m=+917.035394801" observedRunningTime="2026-01-09 11:01:12.308418789 +0000 UTC m=+917.758323640" watchObservedRunningTime="2026-01-09 11:01:12.314360381 +0000 UTC m=+917.764265242" Jan 09 11:01:13 crc kubenswrapper[4727]: I0109 11:01:13.272233 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerStarted","Data":"63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c"} Jan 09 11:01:14 crc kubenswrapper[4727]: I0109 11:01:14.307135 4727 generic.go:334] "Generic (PLEG): container finished" podID="ed95bba5-81db-4552-bb87-8197e56d1164" containerID="63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c" exitCode=0 Jan 09 11:01:14 crc kubenswrapper[4727]: I0109 11:01:14.307460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c"} Jan 09 11:01:16 crc kubenswrapper[4727]: I0109 11:01:16.323757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" 
event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerStarted","Data":"27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28"} Jan 09 11:01:16 crc kubenswrapper[4727]: I0109 11:01:16.353563 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjjfx" podStartSLOduration=3.386835686 podStartE2EDuration="6.353535873s" podCreationTimestamp="2026-01-09 11:01:10 +0000 UTC" firstStartedPulling="2026-01-09 11:01:12.267409477 +0000 UTC m=+917.717314298" lastFinishedPulling="2026-01-09 11:01:15.234109704 +0000 UTC m=+920.684014485" observedRunningTime="2026-01-09 11:01:16.347060315 +0000 UTC m=+921.796965096" watchObservedRunningTime="2026-01-09 11:01:16.353535873 +0000 UTC m=+921.803440694" Jan 09 11:01:20 crc kubenswrapper[4727]: I0109 11:01:20.795249 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:20 crc kubenswrapper[4727]: I0109 11:01:20.796087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:20 crc kubenswrapper[4727]: I0109 11:01:20.844436 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:21 crc kubenswrapper[4727]: I0109 11:01:21.414085 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:21 crc kubenswrapper[4727]: I0109 11:01:21.463269 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"] Jan 09 11:01:23 crc kubenswrapper[4727]: I0109 11:01:23.379300 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tjjfx" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" 
containerID="cri-o://27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28" gracePeriod=2 Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.390992 4727 generic.go:334] "Generic (PLEG): container finished" podID="ed95bba5-81db-4552-bb87-8197e56d1164" containerID="27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28" exitCode=0 Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.391100 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28"} Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.525773 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.533315 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.548429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.735159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.735537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " 
pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.735589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.737194 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.841970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.842794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.843052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.844556 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.847017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.866034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.869886 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.940265 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.049996 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"ed95bba5-81db-4552-bb87-8197e56d1164\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.050397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"ed95bba5-81db-4552-bb87-8197e56d1164\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.050457 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"ed95bba5-81db-4552-bb87-8197e56d1164\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.051436 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities" (OuterVolumeSpecName: "utilities") pod "ed95bba5-81db-4552-bb87-8197e56d1164" (UID: "ed95bba5-81db-4552-bb87-8197e56d1164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.066823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh" (OuterVolumeSpecName: "kube-api-access-kxmwh") pod "ed95bba5-81db-4552-bb87-8197e56d1164" (UID: "ed95bba5-81db-4552-bb87-8197e56d1164"). InnerVolumeSpecName "kube-api-access-kxmwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.132463 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed95bba5-81db-4552-bb87-8197e56d1164" (UID: "ed95bba5-81db-4552-bb87-8197e56d1164"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.152435 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.152483 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.152498 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.400760 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839"} Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.400857 4727 scope.go:117] "RemoveContainer" containerID="27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.400861 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.428954 4727 scope.go:117] "RemoveContainer" containerID="63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c" Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.449669 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"] Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.460584 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"] Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.479657 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.484793 4727 scope.go:117] "RemoveContainer" containerID="279293e70d6049a7de0d4b5a0a88deb976d49f0ec5630012b133934b3b97e2ff" Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.411367 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9820648-5736-44d6-a4de-d859613ca72a" containerID="38b4bd7ad7d6efe596ba4944480bad75ce4aef56c76d9b2f4b7d7952a14c730e" exitCode=0 Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.411402 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"38b4bd7ad7d6efe596ba4944480bad75ce4aef56c76d9b2f4b7d7952a14c730e"} Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.411714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerStarted","Data":"6375e166651f8e6dda4be61b8f9e768148ab8e9d2f7cb5925876ed999a6c55dd"} Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.869346 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ed95bba5-81db-4552-bb87-8197e56d1164" path="/var/lib/kubelet/pods/ed95bba5-81db-4552-bb87-8197e56d1164/volumes" Jan 09 11:01:28 crc kubenswrapper[4727]: I0109 11:01:28.428178 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerStarted","Data":"33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9"} Jan 09 11:01:29 crc kubenswrapper[4727]: I0109 11:01:29.441229 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9820648-5736-44d6-a4de-d859613ca72a" containerID="33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9" exitCode=0 Jan 09 11:01:29 crc kubenswrapper[4727]: I0109 11:01:29.441303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9"} Jan 09 11:01:30 crc kubenswrapper[4727]: I0109 11:01:30.451860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerStarted","Data":"68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f"} Jan 09 11:01:30 crc kubenswrapper[4727]: I0109 11:01:30.476693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6xmrq" podStartSLOduration=2.715979577 podStartE2EDuration="6.476667185s" podCreationTimestamp="2026-01-09 11:01:24 +0000 UTC" firstStartedPulling="2026-01-09 11:01:26.413677551 +0000 UTC m=+931.863582332" lastFinishedPulling="2026-01-09 11:01:30.174365159 +0000 UTC m=+935.624269940" observedRunningTime="2026-01-09 11:01:30.47305168 +0000 UTC m=+935.922956491" watchObservedRunningTime="2026-01-09 11:01:30.476667185 +0000 UTC m=+935.926571976" Jan 09 11:01:34 crc 
kubenswrapper[4727]: I0109 11:01:34.870502 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:34 crc kubenswrapper[4727]: I0109 11:01:34.870858 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:34 crc kubenswrapper[4727]: I0109 11:01:34.911686 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:35 crc kubenswrapper[4727]: I0109 11:01:35.527232 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:35 crc kubenswrapper[4727]: I0109 11:01:35.568523 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:37 crc kubenswrapper[4727]: I0109 11:01:37.494334 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6xmrq" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" containerID="cri-o://68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f" gracePeriod=2 Jan 09 11:01:38 crc kubenswrapper[4727]: I0109 11:01:38.508769 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9820648-5736-44d6-a4de-d859613ca72a" containerID="68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f" exitCode=0 Jan 09 11:01:38 crc kubenswrapper[4727]: I0109 11:01:38.508857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f"} Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.035892 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.200269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"d9820648-5736-44d6-a4de-d859613ca72a\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.200364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"d9820648-5736-44d6-a4de-d859613ca72a\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.200421 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"d9820648-5736-44d6-a4de-d859613ca72a\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.201600 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities" (OuterVolumeSpecName: "utilities") pod "d9820648-5736-44d6-a4de-d859613ca72a" (UID: "d9820648-5736-44d6-a4de-d859613ca72a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.215417 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf" (OuterVolumeSpecName: "kube-api-access-jgmdf") pod "d9820648-5736-44d6-a4de-d859613ca72a" (UID: "d9820648-5736-44d6-a4de-d859613ca72a"). InnerVolumeSpecName "kube-api-access-jgmdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.250712 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9820648-5736-44d6-a4de-d859613ca72a" (UID: "d9820648-5736-44d6-a4de-d859613ca72a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.302603 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.302702 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.302722 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.405006 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.405087 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.520422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"6375e166651f8e6dda4be61b8f9e768148ab8e9d2f7cb5925876ed999a6c55dd"} Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.520533 4727 scope.go:117] "RemoveContainer" containerID="68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.520543 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.539475 4727 scope.go:117] "RemoveContainer" containerID="33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.558568 4727 scope.go:117] "RemoveContainer" containerID="38b4bd7ad7d6efe596ba4944480bad75ce4aef56c76d9b2f4b7d7952a14c730e" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.570654 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.578291 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:40 crc kubenswrapper[4727]: I0109 11:01:40.871312 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9820648-5736-44d6-a4de-d859613ca72a" path="/var/lib/kubelet/pods/d9820648-5736-44d6-a4de-d859613ca72a/volumes" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.762854 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx"] Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763785 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763810 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763858 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763870 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763887 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763902 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763920 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763931 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763947 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763958 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763978 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763989 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.764167 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.764197 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.764973 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.767240 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pht98" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.771238 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.772453 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.775742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xjx8w" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.779688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.792803 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.793695 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.797440 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kr5dj" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.818305 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.821405 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.830844 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.834883 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.840938 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fpg6q" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.851298 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.852167 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.859950 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2szkv" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.860344 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.865561 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.892928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njtg\" (UniqueName: \"kubernetes.io/projected/f57a8b19-1f94-4cc4-af28-f7c506f93de5-kube-api-access-2njtg\") pod \"barbican-operator-controller-manager-f6f74d6db-nd7lx\" (UID: \"f57a8b19-1f94-4cc4-af28-f7c506f93de5\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.893358 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxflt\" (UniqueName: 
\"kubernetes.io/projected/63639485-2ddb-4983-921a-9de5dda98f0f-kube-api-access-gxflt\") pod \"cinder-operator-controller-manager-78979fc445-l25ck\" (UID: \"63639485-2ddb-4983-921a-9de5dda98f0f\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.895596 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.896767 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.899075 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mv86g" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.914616 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.915682 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.924848 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.925468 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t6tcr" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.925748 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.925844 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.928422 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-r9zld" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.948055 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.962592 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.967078 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.978578 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.979639 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.985028 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.985210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sftbs" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996260 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqm82\" (UniqueName: \"kubernetes.io/projected/9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6-kube-api-access-zqm82\") pod \"glance-operator-controller-manager-7b549fc966-w5c7d\" (UID: \"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvgt\" (UniqueName: \"kubernetes.io/projected/e8c91cda-4264-401f-83de-20ddcf5f0d4d-kube-api-access-9fvgt\") pod \"designate-operator-controller-manager-66f8b87655-l4fld\" (UID: \"e8c91cda-4264-401f-83de-20ddcf5f0d4d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996349 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc688\" (UniqueName: \"kubernetes.io/projected/24886819-7c1f-4b1f-880e-4b2102e302c1-kube-api-access-kc688\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 
11:01:45.996393 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njtg\" (UniqueName: \"kubernetes.io/projected/f57a8b19-1f94-4cc4-af28-f7c506f93de5-kube-api-access-2njtg\") pod \"barbican-operator-controller-manager-f6f74d6db-nd7lx\" (UID: \"f57a8b19-1f94-4cc4-af28-f7c506f93de5\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996436 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdq6\" (UniqueName: \"kubernetes.io/projected/9891b17e-81f9-4999-b489-db3e162c2a54-kube-api-access-zzdq6\") pod \"heat-operator-controller-manager-658dd65b86-s49vr\" (UID: \"9891b17e-81f9-4999-b489-db3e162c2a54\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxflt\" (UniqueName: \"kubernetes.io/projected/63639485-2ddb-4983-921a-9de5dda98f0f-kube-api-access-gxflt\") pod \"cinder-operator-controller-manager-78979fc445-l25ck\" (UID: \"63639485-2ddb-4983-921a-9de5dda98f0f\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8nq\" (UniqueName: 
\"kubernetes.io/projected/51db22df-3d25-4c12-b104-eb3848940958-kube-api-access-sd8nq\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nxc7n\" (UID: \"51db22df-3d25-4c12-b104-eb3848940958\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.005901 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.006806 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.007856 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.009731 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bkcvd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.019189 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.023058 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9k9bz" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.055641 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njtg\" (UniqueName: \"kubernetes.io/projected/f57a8b19-1f94-4cc4-af28-f7c506f93de5-kube-api-access-2njtg\") pod \"barbican-operator-controller-manager-f6f74d6db-nd7lx\" (UID: \"f57a8b19-1f94-4cc4-af28-f7c506f93de5\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:46 crc 
kubenswrapper[4727]: I0109 11:01:46.055643 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxflt\" (UniqueName: \"kubernetes.io/projected/63639485-2ddb-4983-921a-9de5dda98f0f-kube-api-access-gxflt\") pod \"cinder-operator-controller-manager-78979fc445-l25ck\" (UID: \"63639485-2ddb-4983-921a-9de5dda98f0f\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.055742 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.083591 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.094491 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdq6\" (UniqueName: \"kubernetes.io/projected/9891b17e-81f9-4999-b489-db3e162c2a54-kube-api-access-zzdq6\") pod \"heat-operator-controller-manager-658dd65b86-s49vr\" (UID: \"9891b17e-81f9-4999-b489-db3e162c2a54\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097550 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfjp\" (UniqueName: \"kubernetes.io/projected/e4480343-1920-4926-8668-e47e5bbfb646-kube-api-access-2xfjp\") pod \"ironic-operator-controller-manager-f99f54bc8-g5ckd\" (UID: \"e4480343-1920-4926-8668-e47e5bbfb646\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8nq\" (UniqueName: \"kubernetes.io/projected/51db22df-3d25-4c12-b104-eb3848940958-kube-api-access-sd8nq\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nxc7n\" (UID: \"51db22df-3d25-4c12-b104-eb3848940958\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqm82\" (UniqueName: \"kubernetes.io/projected/9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6-kube-api-access-zqm82\") pod \"glance-operator-controller-manager-7b549fc966-w5c7d\" (UID: \"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqv5l\" (UniqueName: \"kubernetes.io/projected/6040cced-684e-4521-9c4e-1debba9d5320-kube-api-access-nqv5l\") pod \"keystone-operator-controller-manager-568985c78-4nzmw\" (UID: \"6040cced-684e-4521-9c4e-1debba9d5320\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvgt\" (UniqueName: 
\"kubernetes.io/projected/e8c91cda-4264-401f-83de-20ddcf5f0d4d-kube-api-access-9fvgt\") pod \"designate-operator-controller-manager-66f8b87655-l4fld\" (UID: \"e8c91cda-4264-401f-83de-20ddcf5f0d4d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc688\" (UniqueName: \"kubernetes.io/projected/24886819-7c1f-4b1f-880e-4b2102e302c1-kube-api-access-kc688\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.103756 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.103817 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:46.603801279 +0000 UTC m=+952.053706060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.107982 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.129602 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.130705 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.138081 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xpxkv" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.142359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc688\" (UniqueName: \"kubernetes.io/projected/24886819-7c1f-4b1f-880e-4b2102e302c1-kube-api-access-kc688\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.159082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdq6\" (UniqueName: \"kubernetes.io/projected/9891b17e-81f9-4999-b489-db3e162c2a54-kube-api-access-zzdq6\") pod \"heat-operator-controller-manager-658dd65b86-s49vr\" (UID: \"9891b17e-81f9-4999-b489-db3e162c2a54\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.162177 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqm82\" (UniqueName: \"kubernetes.io/projected/9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6-kube-api-access-zqm82\") pod \"glance-operator-controller-manager-7b549fc966-w5c7d\" (UID: \"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6\") " 
pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.166008 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.179670 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.180259 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.192976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvgt\" (UniqueName: \"kubernetes.io/projected/e8c91cda-4264-401f-83de-20ddcf5f0d4d-kube-api-access-9fvgt\") pod \"designate-operator-controller-manager-66f8b87655-l4fld\" (UID: \"e8c91cda-4264-401f-83de-20ddcf5f0d4d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.194554 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.195701 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.198230 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vshzg" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.199142 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8nq\" (UniqueName: \"kubernetes.io/projected/51db22df-3d25-4c12-b104-eb3848940958-kube-api-access-sd8nq\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nxc7n\" (UID: \"51db22df-3d25-4c12-b104-eb3848940958\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.199722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfjp\" (UniqueName: \"kubernetes.io/projected/e4480343-1920-4926-8668-e47e5bbfb646-kube-api-access-2xfjp\") pod \"ironic-operator-controller-manager-f99f54bc8-g5ckd\" (UID: \"e4480343-1920-4926-8668-e47e5bbfb646\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.213341 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kww9n\" (UniqueName: \"kubernetes.io/projected/ddfee9e4-1084-4750-ab19-473dde7a2fb6-kube-api-access-kww9n\") pod \"manila-operator-controller-manager-598945d5b8-6gtz5\" (UID: \"ddfee9e4-1084-4750-ab19-473dde7a2fb6\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.213492 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqhbt\" (UniqueName: \"kubernetes.io/projected/e604d4a1-bf95-49df-a854-b15337b7fae7-kube-api-access-tqhbt\") pod 
\"mariadb-operator-controller-manager-7b88bfc995-4dv6h\" (UID: \"e604d4a1-bf95-49df-a854-b15337b7fae7\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.213671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqv5l\" (UniqueName: \"kubernetes.io/projected/6040cced-684e-4521-9c4e-1debba9d5320-kube-api-access-nqv5l\") pod \"keystone-operator-controller-manager-568985c78-4nzmw\" (UID: \"6040cced-684e-4521-9c4e-1debba9d5320\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.226683 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.238093 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqv5l\" (UniqueName: \"kubernetes.io/projected/6040cced-684e-4521-9c4e-1debba9d5320-kube-api-access-nqv5l\") pod \"keystone-operator-controller-manager-568985c78-4nzmw\" (UID: \"6040cced-684e-4521-9c4e-1debba9d5320\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.284936 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfjp\" (UniqueName: \"kubernetes.io/projected/e4480343-1920-4926-8668-e47e5bbfb646-kube-api-access-2xfjp\") pod \"ironic-operator-controller-manager-f99f54bc8-g5ckd\" (UID: \"e4480343-1920-4926-8668-e47e5bbfb646\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.297331 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.301115 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqhbt\" (UniqueName: \"kubernetes.io/projected/e604d4a1-bf95-49df-a854-b15337b7fae7-kube-api-access-tqhbt\") pod \"mariadb-operator-controller-manager-7b88bfc995-4dv6h\" (UID: \"e604d4a1-bf95-49df-a854-b15337b7fae7\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wwh\" (UniqueName: \"kubernetes.io/projected/9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb-kube-api-access-j7wwh\") pod \"nova-operator-controller-manager-5fbbf8b6cc-69kx5\" (UID: \"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgzb\" (UniqueName: \"kubernetes.io/projected/848b9588-10d2-4bd4-bcc0-cccd55334c85-kube-api-access-dlgzb\") pod \"neutron-operator-controller-manager-7cd87b778f-q8wx7\" (UID: \"848b9588-10d2-4bd4-bcc0-cccd55334c85\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320908 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kww9n\" (UniqueName: 
\"kubernetes.io/projected/ddfee9e4-1084-4750-ab19-473dde7a2fb6-kube-api-access-kww9n\") pod \"manila-operator-controller-manager-598945d5b8-6gtz5\" (UID: \"ddfee9e4-1084-4750-ab19-473dde7a2fb6\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.356978 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.360959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqhbt\" (UniqueName: \"kubernetes.io/projected/e604d4a1-bf95-49df-a854-b15337b7fae7-kube-api-access-tqhbt\") pod \"mariadb-operator-controller-manager-7b88bfc995-4dv6h\" (UID: \"e604d4a1-bf95-49df-a854-b15337b7fae7\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.374985 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.375455 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kww9n\" (UniqueName: \"kubernetes.io/projected/ddfee9e4-1084-4750-ab19-473dde7a2fb6-kube-api-access-kww9n\") pod \"manila-operator-controller-manager-598945d5b8-6gtz5\" (UID: \"ddfee9e4-1084-4750-ab19-473dde7a2fb6\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.390376 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.395353 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ldn2c" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.419987 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.421233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgzb\" (UniqueName: \"kubernetes.io/projected/848b9588-10d2-4bd4-bcc0-cccd55334c85-kube-api-access-dlgzb\") pod \"neutron-operator-controller-manager-7cd87b778f-q8wx7\" (UID: \"848b9588-10d2-4bd4-bcc0-cccd55334c85\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.421299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wwh\" (UniqueName: \"kubernetes.io/projected/9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb-kube-api-access-j7wwh\") pod \"nova-operator-controller-manager-5fbbf8b6cc-69kx5\" (UID: \"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.421337 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8xk\" (UniqueName: \"kubernetes.io/projected/fab7e320-c116-4603-9aac-2e310be1b209-kube-api-access-zg8xk\") pod \"octavia-operator-controller-manager-68c649d9d-pnk72\" (UID: \"fab7e320-c116-4603-9aac-2e310be1b209\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.435862 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.443850 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.444808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.450022 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tknwf" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.463192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgzb\" (UniqueName: \"kubernetes.io/projected/848b9588-10d2-4bd4-bcc0-cccd55334c85-kube-api-access-dlgzb\") pod \"neutron-operator-controller-manager-7cd87b778f-q8wx7\" (UID: \"848b9588-10d2-4bd4-bcc0-cccd55334c85\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.463304 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wwh\" (UniqueName: \"kubernetes.io/projected/9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb-kube-api-access-j7wwh\") pod \"nova-operator-controller-manager-5fbbf8b6cc-69kx5\" (UID: \"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.463358 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.467921 4727 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.469125 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.476223 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nw52s" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.491958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.505987 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.516876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.517192 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.518684 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.520916 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jb57n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.523182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8xk\" (UniqueName: \"kubernetes.io/projected/fab7e320-c116-4603-9aac-2e310be1b209-kube-api-access-zg8xk\") pod \"octavia-operator-controller-manager-68c649d9d-pnk72\" (UID: \"fab7e320-c116-4603-9aac-2e310be1b209\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.523248 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6d9\" (UniqueName: \"kubernetes.io/projected/15c1d49b-c086-4c30-9a99-e0fb597dd76f-kube-api-access-6l6d9\") pod \"placement-operator-controller-manager-9b6f8f78c-cc8k9\" (UID: \"15c1d49b-c086-4c30-9a99-e0fb597dd76f\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.527655 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.528247 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.530040 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.532687 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4hsw8" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.550172 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.563344 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.564251 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8xk\" (UniqueName: \"kubernetes.io/projected/fab7e320-c116-4603-9aac-2e310be1b209-kube-api-access-zg8xk\") pod \"octavia-operator-controller-manager-68c649d9d-pnk72\" (UID: \"fab7e320-c116-4603-9aac-2e310be1b209\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.569543 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.570737 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.600300 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.603256 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.619587 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630701 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfzt\" (UniqueName: \"kubernetes.io/projected/558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6-kube-api-access-9wfzt\") pod \"ovn-operator-controller-manager-bf6d4f946-gkkm4\" (UID: \"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l6d\" (UniqueName: \"kubernetes.io/projected/3550e1cd-642e-481c-b98f-b6d3770f51ca-kube-api-access-v2l6d\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8t6\" (UniqueName: \"kubernetes.io/projected/c371fa9c-dd02-4673-99aa-4ec8fa8d9e07-kube-api-access-rd8t6\") pod \"telemetry-operator-controller-manager-68d988df55-x4r9z\" (UID: \"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630956 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6d9\" (UniqueName: 
\"kubernetes.io/projected/15c1d49b-c086-4c30-9a99-e0fb597dd76f-kube-api-access-6l6d9\") pod \"placement-operator-controller-manager-9b6f8f78c-cc8k9\" (UID: \"15c1d49b-c086-4c30-9a99-e0fb597dd76f\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.631001 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.631032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n754b\" (UniqueName: \"kubernetes.io/projected/ba0be6cc-1e31-4421-aa33-1e2514069376-kube-api-access-n754b\") pod \"swift-operator-controller-manager-bb586bbf4-vgcgj\" (UID: \"ba0be6cc-1e31-4421-aa33-1e2514069376\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.631072 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.655725 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.655802 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.655778663 +0000 UTC m=+953.105683444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.655847 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.658041 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wdt6n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.678491 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.679548 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.685886 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fxndl" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.686904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.689908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6d9\" (UniqueName: \"kubernetes.io/projected/15c1d49b-c086-4c30-9a99-e0fb597dd76f-kube-api-access-6l6d9\") pod \"placement-operator-controller-manager-9b6f8f78c-cc8k9\" (UID: \"15c1d49b-c086-4c30-9a99-e0fb597dd76f\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.702946 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.703944 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.712717 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7kdz4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732690 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfzt\" (UniqueName: \"kubernetes.io/projected/558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6-kube-api-access-9wfzt\") pod \"ovn-operator-controller-manager-bf6d4f946-gkkm4\" (UID: \"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2l6d\" (UniqueName: \"kubernetes.io/projected/3550e1cd-642e-481c-b98f-b6d3770f51ca-kube-api-access-v2l6d\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8t6\" (UniqueName: \"kubernetes.io/projected/c371fa9c-dd02-4673-99aa-4ec8fa8d9e07-kube-api-access-rd8t6\") pod \"telemetry-operator-controller-manager-68d988df55-x4r9z\" (UID: \"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwcn\" (UniqueName: \"kubernetes.io/projected/e3f94965-fce3-4e35-9f97-5047e05dd50a-kube-api-access-vxwcn\") 
pod \"test-operator-controller-manager-6c866cfdcb-m8s9d\" (UID: \"e3f94965-fce3-4e35-9f97-5047e05dd50a\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2lj\" (UniqueName: \"kubernetes.io/projected/9300f2a9-97a8-4868-9485-8dd5d51df39e-kube-api-access-ms2lj\") pod \"watcher-operator-controller-manager-9dbdf6486-jvkn5\" (UID: \"9300f2a9-97a8-4868-9485-8dd5d51df39e\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732905 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n754b\" (UniqueName: \"kubernetes.io/projected/ba0be6cc-1e31-4421-aa33-1e2514069376-kube-api-access-n754b\") pod \"swift-operator-controller-manager-bb586bbf4-vgcgj\" (UID: \"ba0be6cc-1e31-4421-aa33-1e2514069376\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.733101 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.733148 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert 
podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.233131489 +0000 UTC m=+952.683036270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.737204 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.746781 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.764593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8t6\" (UniqueName: \"kubernetes.io/projected/c371fa9c-dd02-4673-99aa-4ec8fa8d9e07-kube-api-access-rd8t6\") pod \"telemetry-operator-controller-manager-68d988df55-x4r9z\" (UID: \"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.783027 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfzt\" (UniqueName: \"kubernetes.io/projected/558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6-kube-api-access-9wfzt\") pod \"ovn-operator-controller-manager-bf6d4f946-gkkm4\" (UID: \"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.801977 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n754b\" (UniqueName: 
\"kubernetes.io/projected/ba0be6cc-1e31-4421-aa33-1e2514069376-kube-api-access-n754b\") pod \"swift-operator-controller-manager-bb586bbf4-vgcgj\" (UID: \"ba0be6cc-1e31-4421-aa33-1e2514069376\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.802445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2l6d\" (UniqueName: \"kubernetes.io/projected/3550e1cd-642e-481c-b98f-b6d3770f51ca-kube-api-access-v2l6d\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.819088 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.820209 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.833011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.834178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwcn\" (UniqueName: \"kubernetes.io/projected/e3f94965-fce3-4e35-9f97-5047e05dd50a-kube-api-access-vxwcn\") pod \"test-operator-controller-manager-6c866cfdcb-m8s9d\" (UID: \"e3f94965-fce3-4e35-9f97-5047e05dd50a\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.834224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2lj\" (UniqueName: \"kubernetes.io/projected/9300f2a9-97a8-4868-9485-8dd5d51df39e-kube-api-access-ms2lj\") pod \"watcher-operator-controller-manager-9dbdf6486-jvkn5\" (UID: \"9300f2a9-97a8-4868-9485-8dd5d51df39e\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.834898 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gggbj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.835086 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.835207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.864086 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwcn\" (UniqueName: 
\"kubernetes.io/projected/e3f94965-fce3-4e35-9f97-5047e05dd50a-kube-api-access-vxwcn\") pod \"test-operator-controller-manager-6c866cfdcb-m8s9d\" (UID: \"e3f94965-fce3-4e35-9f97-5047e05dd50a\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.865905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2lj\" (UniqueName: \"kubernetes.io/projected/9300f2a9-97a8-4868-9485-8dd5d51df39e-kube-api-access-ms2lj\") pod \"watcher-operator-controller-manager-9dbdf6486-jvkn5\" (UID: \"9300f2a9-97a8-4868-9485-8dd5d51df39e\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.872047 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.910026 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.911159 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.919589 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jh84c" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.924359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.945994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.946158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.946474 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbbt\" (UniqueName: \"kubernetes.io/projected/6a33b307-e521-43c4-8e35-3e9d7d553716-kube-api-access-spbbt\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.996930 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.038021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.047972 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pth4f\" (UniqueName: \"kubernetes.io/projected/ee5399a2-4352-4013-9c26-a40e4bc815e3-kube-api-access-pth4f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2m6mz\" (UID: \"ee5399a2-4352-4013-9c26-a40e4bc815e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbbt\" (UniqueName: \"kubernetes.io/projected/6a33b307-e521-43c4-8e35-3e9d7d553716-kube-api-access-spbbt\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048183 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") 
pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048307 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048356 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.54833881 +0000 UTC m=+952.998243581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048701 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048716 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx"] Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048725 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.548718371 +0000 UTC m=+952.998623152 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.056913 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.070907 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbbt\" (UniqueName: \"kubernetes.io/projected/6a33b307-e521-43c4-8e35-3e9d7d553716-kube-api-access-spbbt\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.080648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.104158 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.144582 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf57a8b19_1f94_4cc4_af28_f7c506f93de5.slice/crio-31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33 WatchSource:0}: Error finding container 31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33: Status 404 returned error can't find the container with id 31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33 Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.147005 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e494b5d_8aeb_47ed_b0a6_5e83b7f58bf6.slice/crio-4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be WatchSource:0}: Error finding container 4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be: Status 404 returned error can't find the container with id 4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.149063 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pth4f\" (UniqueName: \"kubernetes.io/projected/ee5399a2-4352-4013-9c26-a40e4bc815e3-kube-api-access-pth4f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2m6mz\" (UID: \"ee5399a2-4352-4013-9c26-a40e4bc815e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.198230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pth4f\" (UniqueName: \"kubernetes.io/projected/ee5399a2-4352-4013-9c26-a40e4bc815e3-kube-api-access-pth4f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2m6mz\" (UID: 
\"ee5399a2-4352-4013-9c26-a40e4bc815e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.250389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.250643 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.250713 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:48.250690325 +0000 UTC m=+953.700595106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.251198 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.558346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.558873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.558695 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.559104 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:48.559087958 +0000 UTC m=+954.008992739 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.559047 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.559633 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:48.559625313 +0000 UTC m=+954.009530094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.608567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" event={"ID":"f57a8b19-1f94-4cc4-af28-f7c506f93de5","Type":"ContainerStarted","Data":"31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33"} Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.609626 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" event={"ID":"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6","Type":"ContainerStarted","Data":"4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be"} Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.663291 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.663619 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.663699 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:49.663680765 +0000 UTC m=+955.113585536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.706754 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.731406 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.755162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.763740 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.782026 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.795322 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.798452 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c91cda_4264_401f_83de_20ddcf5f0d4d.slice/crio-ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55 WatchSource:0}: Error finding container ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55: Status 404 returned error can't find the container with id ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55 Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.803592 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.812564 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.818929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.828885 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4480343_1920_4926_8668_e47e5bbfb646.slice/crio-5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe WatchSource:0}: Error finding container 5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe: Status 404 returned error can't find the container with id 5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe Jan 
09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.959937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.981652 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.989211 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.989872 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9300f2a9_97a8_4868_9485_8dd5d51df39e.slice/crio-7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6 WatchSource:0}: Error finding container 7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6: Status 404 returned error can't find the container with id 7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6 Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.995825 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.999111 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddfee9e4_1084_4750_ab19_473dde7a2fb6.slice/crio-04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da WatchSource:0}: Error finding container 04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da: Status 404 returned error can't find the container with id 04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.000970 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms2lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-jvkn5_openstack-operators(9300f2a9-97a8-4868-9485-8dd5d51df39e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.000985 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l6d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-cc8k9_openstack-operators(15c1d49b-c086-4c30-9a99-e0fb597dd76f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.002313 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5"] Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.002493 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podUID="15c1d49b-c086-4c30-9a99-e0fb597dd76f" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.002581 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podUID="9300f2a9-97a8-4868-9485-8dd5d51df39e" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.003271 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kww9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-6gtz5_openstack-operators(ddfee9e4-1084-4750-ab19-473dde7a2fb6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.004456 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podUID="ddfee9e4-1084-4750-ab19-473dde7a2fb6" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.008457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72"] Jan 09 11:01:48 crc kubenswrapper[4727]: W0109 11:01:48.011040 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab7e320_c116_4603_9aac_2e310be1b209.slice/crio-ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808 WatchSource:0}: Error finding container ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808: Status 404 returned error can't find the container with id ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808 Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.014197 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zg8xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-pnk72_openstack-operators(fab7e320-c116-4603-9aac-2e310be1b209): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.015309 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podUID="fab7e320-c116-4603-9aac-2e310be1b209" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.131015 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj"] Jan 09 11:01:48 crc kubenswrapper[4727]: W0109 11:01:48.134911 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba0be6cc_1e31_4421_aa33_1e2514069376.slice/crio-ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f WatchSource:0}: Error finding container ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f: Status 404 returned error can't find the container with id 
ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.151822 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz"] Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.156841 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z"] Jan 09 11:01:48 crc kubenswrapper[4727]: W0109 11:01:48.167173 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc371fa9c_dd02_4673_99aa_4ec8fa8d9e07.slice/crio-964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723 WatchSource:0}: Error finding container 964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723: Status 404 returned error can't find the container with id 964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723 Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.173700 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rd8t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-68d988df55-x4r9z_openstack-operators(c371fa9c-dd02-4673-99aa-4ec8fa8d9e07): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.175047 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podUID="c371fa9c-dd02-4673-99aa-4ec8fa8d9e07" Jan 09 11:01:48 crc 
kubenswrapper[4727]: W0109 11:01:48.190668 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5399a2_4352_4013_9c26_a40e4bc815e3.slice/crio-046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc WatchSource:0}: Error finding container 046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc: Status 404 returned error can't find the container with id 046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.192121 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pth4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2m6mz_openstack-operators(ee5399a2-4352-4013-9c26-a40e4bc815e3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.193983 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podUID="ee5399a2-4352-4013-9c26-a40e4bc815e3" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.274050 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.274302 4727 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.274405 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:50.274380974 +0000 UTC m=+955.724285755 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.578359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.578554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578637 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578685 4727 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578782 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:50.578730609 +0000 UTC m=+956.028635390 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578817 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:50.578808421 +0000 UTC m=+956.028713212 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.625543 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" event={"ID":"15c1d49b-c086-4c30-9a99-e0fb597dd76f","Type":"ContainerStarted","Data":"a756cb36cb11b3b33c8108da3617daa79fd8928405734a0f8b9274b42ab599c5"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.628347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podUID="15c1d49b-c086-4c30-9a99-e0fb597dd76f" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.629124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" event={"ID":"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07","Type":"ContainerStarted","Data":"964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.636129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" event={"ID":"e604d4a1-bf95-49df-a854-b15337b7fae7","Type":"ContainerStarted","Data":"e5a945f53cbd569d1611ccecf6d63a02ce59f5ade3fe1d9f687ebbb5eedc4d72"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.637131 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podUID="c371fa9c-dd02-4673-99aa-4ec8fa8d9e07" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.641944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" event={"ID":"9891b17e-81f9-4999-b489-db3e162c2a54","Type":"ContainerStarted","Data":"aaba35acac5990b88021453d6173eb0cdf03cf7658472ecac5ab4fb85b091ffc"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.647724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" event={"ID":"ee5399a2-4352-4013-9c26-a40e4bc815e3","Type":"ContainerStarted","Data":"046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.649878 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podUID="ee5399a2-4352-4013-9c26-a40e4bc815e3" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.650546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" event={"ID":"e3f94965-fce3-4e35-9f97-5047e05dd50a","Type":"ContainerStarted","Data":"721ed2ab96f86a54afb6fffb5e390165b5f9b68ef273933572b79e1a458625e6"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.653297 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" event={"ID":"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb","Type":"ContainerStarted","Data":"24532db6f50a9696ff5f485e6ab155d385e9253ad98ea34448311a92e8dd6c05"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.664035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" event={"ID":"9300f2a9-97a8-4868-9485-8dd5d51df39e","Type":"ContainerStarted","Data":"7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.666268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" event={"ID":"51db22df-3d25-4c12-b104-eb3848940958","Type":"ContainerStarted","Data":"48743ec3f802836fe1d9cdd56b96cc1dbe5d84bb875d3d21e62e04b40d4a6f9f"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.667852 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podUID="9300f2a9-97a8-4868-9485-8dd5d51df39e" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.677366 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" event={"ID":"848b9588-10d2-4bd4-bcc0-cccd55334c85","Type":"ContainerStarted","Data":"061118e73ac27746b69fb9b2f2017919f8c96781dd747f9fb14baa5fb2ab70b6"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.697946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" 
event={"ID":"ddfee9e4-1084-4750-ab19-473dde7a2fb6","Type":"ContainerStarted","Data":"04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.702950 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podUID="ddfee9e4-1084-4750-ab19-473dde7a2fb6" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.704779 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" event={"ID":"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6","Type":"ContainerStarted","Data":"a67d5f210f9baf82a8f41f7e3259d08d199abed1c186da38111f6756b12f53d4"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.720055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" event={"ID":"ba0be6cc-1e31-4421-aa33-1e2514069376","Type":"ContainerStarted","Data":"ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.721949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" event={"ID":"63639485-2ddb-4983-921a-9de5dda98f0f","Type":"ContainerStarted","Data":"2d39ba517bfa72e25c5713e884408d015e8a01c6b0bfec670b9028cf641909fb"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.727093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" 
event={"ID":"e8c91cda-4264-401f-83de-20ddcf5f0d4d","Type":"ContainerStarted","Data":"ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.728454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" event={"ID":"6040cced-684e-4521-9c4e-1debba9d5320","Type":"ContainerStarted","Data":"ee1b87ead52e3b6aabff4bc3e39a72a3182bd005cab4e5e2537dd152b6281469"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.732288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" event={"ID":"e4480343-1920-4926-8668-e47e5bbfb646","Type":"ContainerStarted","Data":"5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.735848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" event={"ID":"fab7e320-c116-4603-9aac-2e310be1b209","Type":"ContainerStarted","Data":"ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.740831 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podUID="fab7e320-c116-4603-9aac-2e310be1b209" Jan 09 11:01:49 crc kubenswrapper[4727]: I0109 11:01:49.718814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: 
\"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.718997 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.719341 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:53.719322432 +0000 UTC m=+959.169227203 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.765779 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podUID="9300f2a9-97a8-4868-9485-8dd5d51df39e" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.765798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podUID="ddfee9e4-1084-4750-ab19-473dde7a2fb6" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.765858 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podUID="fab7e320-c116-4603-9aac-2e310be1b209" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.766101 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podUID="c371fa9c-dd02-4673-99aa-4ec8fa8d9e07" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.766164 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podUID="15c1d49b-c086-4c30-9a99-e0fb597dd76f" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.774421 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podUID="ee5399a2-4352-4013-9c26-a40e4bc815e3" Jan 09 11:01:50 crc kubenswrapper[4727]: I0109 11:01:50.330396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.330817 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.330938 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:54.330918378 +0000 UTC m=+959.780823159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: I0109 11:01:50.636320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:50 crc kubenswrapper[4727]: I0109 11:01:50.636384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: 
\"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.636568 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.636653 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:54.636633433 +0000 UTC m=+960.086538214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.641583 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.641667 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:54.641649849 +0000 UTC m=+960.091554620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:53 crc kubenswrapper[4727]: I0109 11:01:53.790585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:53 crc kubenswrapper[4727]: E0109 11:01:53.790780 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:53 crc kubenswrapper[4727]: E0109 11:01:53.791214 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:01.791186783 +0000 UTC m=+967.241091604 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: I0109 11:01:54.399587 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.399771 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.399842 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:02:02.399825453 +0000 UTC m=+967.849730234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: I0109 11:01:54.703794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:54 crc kubenswrapper[4727]: I0109 11:01:54.703928 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.703994 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.704064 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.704086 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:02.704063526 +0000 UTC m=+968.153968307 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.704106 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:02.704095146 +0000 UTC m=+968.153999927 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:02:01 crc kubenswrapper[4727]: I0109 11:02:01.821659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:01 crc kubenswrapper[4727]: E0109 11:02:01.821923 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:02:01 crc kubenswrapper[4727]: E0109 11:02:01.822310 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:17.822287116 +0000 UTC m=+983.272191977 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.433427 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.444087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.581477 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tknwf" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.589824 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.738579 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.738653 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738722 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738789 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:18.738773502 +0000 UTC m=+984.188678283 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738806 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738888 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:18.738867546 +0000 UTC m=+984.188772337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.073880 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.074202 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzdq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-s49vr_openstack-operators(9891b17e-81f9-4999-b489-db3e162c2a54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.075554 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" podUID="9891b17e-81f9-4999-b489-db3e162c2a54" Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.913478 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" podUID="9891b17e-81f9-4999-b489-db3e162c2a54" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.756018 4727 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.756712 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n754b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-vgcgj_openstack-operators(ba0be6cc-1e31-4421-aa33-1e2514069376): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.757953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" podUID="ba0be6cc-1e31-4421-aa33-1e2514069376" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.925587 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" podUID="ba0be6cc-1e31-4421-aa33-1e2514069376" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.349331 4727 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.349562 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqhbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-4dv6h_openstack-operators(e604d4a1-bf95-49df-a854-b15337b7fae7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.350767 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" podUID="e604d4a1-bf95-49df-a854-b15337b7fae7" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.931879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" podUID="e604d4a1-bf95-49df-a854-b15337b7fae7" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.084464 4727 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.084780 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wfzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-gkkm4_openstack-operators(558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.085976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" podUID="558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.605643 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.606030 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-f99f54bc8-g5ckd_openstack-operators(e4480343-1920-4926-8668-e47e5bbfb646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.607382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" podUID="e4480343-1920-4926-8668-e47e5bbfb646" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.938277 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" podUID="e4480343-1920-4926-8668-e47e5bbfb646" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.939858 4727 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" podUID="558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6" Jan 09 11:02:09 crc kubenswrapper[4727]: I0109 11:02:09.405396 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:02:09 crc kubenswrapper[4727]: I0109 11:02:09.405992 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.709237 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.709465 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7wwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-69kx5_openstack-operators(9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.710762 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" podUID="9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.953531 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" podUID="9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.922650 4727 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.924168 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqv5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-4nzmw_openstack-operators(6040cced-684e-4521-9c4e-1debba9d5320): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.925958 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" podUID="6040cced-684e-4521-9c4e-1debba9d5320" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.983527 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" podUID="6040cced-684e-4521-9c4e-1debba9d5320" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.613537 4727 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh"] Jan 09 11:02:14 crc kubenswrapper[4727]: W0109 11:02:14.617963 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3550e1cd_642e_481c_b98f_b6d3770f51ca.slice/crio-f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9 WatchSource:0}: Error finding container f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9: Status 404 returned error can't find the container with id f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9 Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.989805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" event={"ID":"ddfee9e4-1084-4750-ab19-473dde7a2fb6","Type":"ContainerStarted","Data":"c3d247fa40c5480d5aab2f1f6dc84b14a8b413ccd080e599d429378eb5874d1b"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.990860 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.992973 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" event={"ID":"9300f2a9-97a8-4868-9485-8dd5d51df39e","Type":"ContainerStarted","Data":"837b452b4285068a8e89566b704f01a147caf7696f203df4cf53ab1d6e29ff05"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.993179 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.994899 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" 
event={"ID":"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07","Type":"ContainerStarted","Data":"f8f3984e3e5f52173e77180f1dc930be0f613c15584dca1d15baa6d88cc21c50"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.995277 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.999789 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" event={"ID":"e8c91cda-4264-401f-83de-20ddcf5f0d4d","Type":"ContainerStarted","Data":"21f2277f2edb20274e26efd008a107ca81526a454e12ff5145af2a9690097ad4"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.999941 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.003985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" event={"ID":"f57a8b19-1f94-4cc4-af28-f7c506f93de5","Type":"ContainerStarted","Data":"7ecfaeef59a2104b98f4104ea8a3b4a99b2a7f24fb2c13b20200de0f393b99e4"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.004087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.005788 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" event={"ID":"63639485-2ddb-4983-921a-9de5dda98f0f","Type":"ContainerStarted","Data":"2509a2ab82303e8687651e9b58caeb210127f5593d47d45277da1ab313298b0c"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.005906 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.007646 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" event={"ID":"fab7e320-c116-4603-9aac-2e310be1b209","Type":"ContainerStarted","Data":"4ac20d6ec0be98bb89330d450d70b08ba3fd3d514ac3c38b707b4fd906d7bdb0"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.007840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.010286 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" event={"ID":"848b9588-10d2-4bd4-bcc0-cccd55334c85","Type":"ContainerStarted","Data":"182193dbafe400f7dfd00197e79fc65c62aad46d4dfe895f2fce0b1c20e4ed6b"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.010406 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.011806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" event={"ID":"ee5399a2-4352-4013-9c26-a40e4bc815e3","Type":"ContainerStarted","Data":"4e26075ecab307f19fe526a9072636d873a8255b9bbbd0d55e98e0c546e4f0f2"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.013144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" event={"ID":"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6","Type":"ContainerStarted","Data":"990b7a72654399e7381999e2234f4b40be0ca16785ef6ba06890f1b31b515731"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.013268 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.015126 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" event={"ID":"51db22df-3d25-4c12-b104-eb3848940958","Type":"ContainerStarted","Data":"f7fe7b15c14b3db0a8226c1cec8c84eb8af81f6087cf1426e3966b1a32427b56"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.015200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.016985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" event={"ID":"3550e1cd-642e-481c-b98f-b6d3770f51ca","Type":"ContainerStarted","Data":"f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.018867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" event={"ID":"15c1d49b-c086-4c30-9a99-e0fb597dd76f","Type":"ContainerStarted","Data":"33d876af7a50c0608fbd3a2db0ab29ad0768dd098d562c637ad1610ff6cecade"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.019642 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.021386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" event={"ID":"e3f94965-fce3-4e35-9f97-5047e05dd50a","Type":"ContainerStarted","Data":"dbddf10a9a6b7304b9ef6683524de2cc3a50b4bbf6286548158bf305bfcb35b9"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.021581 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.027435 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podStartSLOduration=3.611448848 podStartE2EDuration="30.027416193s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.003131759 +0000 UTC m=+953.453036540" lastFinishedPulling="2026-01-09 11:02:14.419099094 +0000 UTC m=+979.869003885" observedRunningTime="2026-01-09 11:02:15.020732198 +0000 UTC m=+980.470636999" watchObservedRunningTime="2026-01-09 11:02:15.027416193 +0000 UTC m=+980.477320984" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.037457 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podStartSLOduration=2.729889824 podStartE2EDuration="29.03744282s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.192018483 +0000 UTC m=+953.641923264" lastFinishedPulling="2026-01-09 11:02:14.499571479 +0000 UTC m=+979.949476260" observedRunningTime="2026-01-09 11:02:15.034753916 +0000 UTC m=+980.484658697" watchObservedRunningTime="2026-01-09 11:02:15.03744282 +0000 UTC m=+980.487347601" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.060197 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" podStartSLOduration=6.802890146 podStartE2EDuration="30.060172259s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.827912012 +0000 UTC m=+953.277816793" lastFinishedPulling="2026-01-09 11:02:11.085194125 +0000 UTC m=+976.535098906" observedRunningTime="2026-01-09 11:02:15.05657599 
+0000 UTC m=+980.506480771" watchObservedRunningTime="2026-01-09 11:02:15.060172259 +0000 UTC m=+980.510077050" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.075821 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" podStartSLOduration=6.221994046 podStartE2EDuration="30.075788651s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.75585488 +0000 UTC m=+953.205759661" lastFinishedPulling="2026-01-09 11:02:11.609649495 +0000 UTC m=+977.059554266" observedRunningTime="2026-01-09 11:02:15.075575455 +0000 UTC m=+980.525480266" watchObservedRunningTime="2026-01-09 11:02:15.075788651 +0000 UTC m=+980.525693432" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.108574 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" podStartSLOduration=6.207488794 podStartE2EDuration="30.108554657s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.183241817 +0000 UTC m=+952.633146598" lastFinishedPulling="2026-01-09 11:02:11.08430768 +0000 UTC m=+976.534212461" observedRunningTime="2026-01-09 11:02:15.104852434 +0000 UTC m=+980.554757215" watchObservedRunningTime="2026-01-09 11:02:15.108554657 +0000 UTC m=+980.558459438" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.126976 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podStartSLOduration=2.921819235 podStartE2EDuration="29.126954775s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.173447024 +0000 UTC m=+953.623351805" lastFinishedPulling="2026-01-09 11:02:14.378582554 +0000 UTC m=+979.828487345" observedRunningTime="2026-01-09 11:02:15.122947554 +0000 UTC m=+980.572852335" 
watchObservedRunningTime="2026-01-09 11:02:15.126954775 +0000 UTC m=+980.576859556" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.171059 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" podStartSLOduration=6.268866692 podStartE2EDuration="30.171035695s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.182906227 +0000 UTC m=+952.632811008" lastFinishedPulling="2026-01-09 11:02:11.08507523 +0000 UTC m=+976.534980011" observedRunningTime="2026-01-09 11:02:15.169552344 +0000 UTC m=+980.619457125" watchObservedRunningTime="2026-01-09 11:02:15.171035695 +0000 UTC m=+980.620940476" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.204442 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" podStartSLOduration=6.922750607 podStartE2EDuration="30.204424188s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.803129172 +0000 UTC m=+953.253033953" lastFinishedPulling="2026-01-09 11:02:11.084802743 +0000 UTC m=+976.534707534" observedRunningTime="2026-01-09 11:02:15.201762054 +0000 UTC m=+980.651666825" watchObservedRunningTime="2026-01-09 11:02:15.204424188 +0000 UTC m=+980.654328969" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.241329 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podStartSLOduration=2.852279933 podStartE2EDuration="29.241303517s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.014069257 +0000 UTC m=+953.463974038" lastFinishedPulling="2026-01-09 11:02:14.403092841 +0000 UTC m=+979.852997622" observedRunningTime="2026-01-09 11:02:15.230081817 +0000 UTC m=+980.679986598" 
watchObservedRunningTime="2026-01-09 11:02:15.241303517 +0000 UTC m=+980.691208298" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.287111 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" podStartSLOduration=7.009990285 podStartE2EDuration="30.287091124s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.807333174 +0000 UTC m=+953.257237955" lastFinishedPulling="2026-01-09 11:02:11.084434013 +0000 UTC m=+976.534338794" observedRunningTime="2026-01-09 11:02:15.285851919 +0000 UTC m=+980.735756700" watchObservedRunningTime="2026-01-09 11:02:15.287091124 +0000 UTC m=+980.736995905" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.322414 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podStartSLOduration=2.919980406 podStartE2EDuration="29.3223969s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.00077184 +0000 UTC m=+953.450676621" lastFinishedPulling="2026-01-09 11:02:14.403188294 +0000 UTC m=+979.853093115" observedRunningTime="2026-01-09 11:02:15.321816083 +0000 UTC m=+980.771720864" watchObservedRunningTime="2026-01-09 11:02:15.3223969 +0000 UTC m=+980.772301681" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.380910 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podStartSLOduration=3.058557158 podStartE2EDuration="29.380891417s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.000825462 +0000 UTC m=+953.450730253" lastFinishedPulling="2026-01-09 11:02:14.323159721 +0000 UTC m=+979.773064512" observedRunningTime="2026-01-09 11:02:15.34591393 +0000 UTC m=+980.795818711" 
watchObservedRunningTime="2026-01-09 11:02:15.380891417 +0000 UTC m=+980.830796198" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.382741 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" podStartSLOduration=6.287938163 podStartE2EDuration="29.382735238s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.990345098 +0000 UTC m=+953.440249879" lastFinishedPulling="2026-01-09 11:02:11.085142173 +0000 UTC m=+976.535046954" observedRunningTime="2026-01-09 11:02:15.377171164 +0000 UTC m=+980.827075945" watchObservedRunningTime="2026-01-09 11:02:15.382735238 +0000 UTC m=+980.832640019" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.036279 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" event={"ID":"9891b17e-81f9-4999-b489-db3e162c2a54","Type":"ContainerStarted","Data":"8f59f0c3e933c8f852e9647e86331578418d50c1827cb229b7c03afeea08d62c"} Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.038041 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.881693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" podStartSLOduration=3.905260009 podStartE2EDuration="32.881672082s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.725201611 +0000 UTC m=+953.175106392" lastFinishedPulling="2026-01-09 11:02:16.701613684 +0000 UTC m=+982.151518465" observedRunningTime="2026-01-09 11:02:17.067594133 +0000 UTC m=+982.517498914" watchObservedRunningTime="2026-01-09 11:02:17.881672082 +0000 UTC m=+983.331576873" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 
11:02:17.897489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.908065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.048586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" event={"ID":"3550e1cd-642e-481c-b98f-b6d3770f51ca","Type":"ContainerStarted","Data":"176b427ab1bbe503ec8d4f662bacd76c4d8b2733cf8ed78cc4af43c0b1998af1"} Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.050170 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.067071 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t6tcr" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.075729 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.089026 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" podStartSLOduration=29.061703673 podStartE2EDuration="32.089007215s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:02:14.624128053 +0000 UTC m=+980.074032834" lastFinishedPulling="2026-01-09 11:02:17.651431595 +0000 UTC m=+983.101336376" observedRunningTime="2026-01-09 11:02:18.08883797 +0000 UTC m=+983.538742751" watchObservedRunningTime="2026-01-09 11:02:18.089007215 +0000 UTC m=+983.538911996" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.539295 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd"] Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.810594 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.810644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.818005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.818180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.004250 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gggbj" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.013192 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.056288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" event={"ID":"e604d4a1-bf95-49df-a854-b15337b7fae7","Type":"ContainerStarted","Data":"81771aa716668ef5ba88db6676231eb7e72ec8697e6f08cf9a9a61793ff0dbb2"} Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.057431 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.062392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" event={"ID":"24886819-7c1f-4b1f-880e-4b2102e302c1","Type":"ContainerStarted","Data":"18f78ea8379c2449ff62c5cd9a9a4de60691782579634f6457ddd88f7c34be6d"} Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.477319 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" podStartSLOduration=3.783541357 podStartE2EDuration="34.477298019s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.823013409 +0000 UTC m=+953.272918190" lastFinishedPulling="2026-01-09 11:02:18.516770081 +0000 UTC m=+983.966674852" observedRunningTime="2026-01-09 11:02:19.08528512 +0000 UTC m=+984.535189921" watchObservedRunningTime="2026-01-09 11:02:19.477298019 +0000 UTC m=+984.927202800" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.478868 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft"] Jan 09 11:02:20 crc kubenswrapper[4727]: I0109 11:02:20.071900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" event={"ID":"6a33b307-e521-43c4-8e35-3e9d7d553716","Type":"ContainerStarted","Data":"0b429df8ed511c16a0f1a349427cefc68ef2a4ab2fa575b4f95209112ea894c0"} Jan 09 11:02:20 crc kubenswrapper[4727]: I0109 11:02:20.072244 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" event={"ID":"6a33b307-e521-43c4-8e35-3e9d7d553716","Type":"ContainerStarted","Data":"5da208dc0f252da6a2e9ae95b7b97d7889578afb6cd4fe5ef253add35b9455d5"} Jan 09 11:02:20 crc kubenswrapper[4727]: I0109 11:02:20.110248 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" podStartSLOduration=34.110231239 podStartE2EDuration="34.110231239s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:02:20.108036409 +0000 UTC m=+985.557941180" watchObservedRunningTime="2026-01-09 11:02:20.110231239 +0000 UTC m=+985.560136010" Jan 09 11:02:21 crc kubenswrapper[4727]: I0109 11:02:21.080220 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.087140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" event={"ID":"ba0be6cc-1e31-4421-aa33-1e2514069376","Type":"ContainerStarted","Data":"6b5bcce0a79d6a4f3d562697da3f385fee39658fd68d725280dd781bdacd850c"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.087814 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:02:22 crc kubenswrapper[4727]: 
I0109 11:02:22.088422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" event={"ID":"e4480343-1920-4926-8668-e47e5bbfb646","Type":"ContainerStarted","Data":"f1a372193e9da56d2fdbf199a6a845da49f4056b4caa86ce6b07e3f746f334a7"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.088653 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.093707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" event={"ID":"24886819-7c1f-4b1f-880e-4b2102e302c1","Type":"ContainerStarted","Data":"19c30ade5dff3793b2521850d69aab19dc0234e46fbace8af746c2681c61b9ba"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.093767 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.097622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" event={"ID":"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6","Type":"ContainerStarted","Data":"351434f3caec0b44b429eb306e6ee454c84aba995141914e002a152dc3c541fd"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.098157 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.118262 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" podStartSLOduration=3.240084216 podStartE2EDuration="36.11824583s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 
11:01:48.137577292 +0000 UTC m=+953.587482073" lastFinishedPulling="2026-01-09 11:02:21.015738906 +0000 UTC m=+986.465643687" observedRunningTime="2026-01-09 11:02:22.108015327 +0000 UTC m=+987.557920108" watchObservedRunningTime="2026-01-09 11:02:22.11824583 +0000 UTC m=+987.568150611" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.146905 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" podStartSLOduration=3.693131075 podStartE2EDuration="37.146887802s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.835128342 +0000 UTC m=+953.285033123" lastFinishedPulling="2026-01-09 11:02:21.288885069 +0000 UTC m=+986.738789850" observedRunningTime="2026-01-09 11:02:22.137114452 +0000 UTC m=+987.587019253" watchObservedRunningTime="2026-01-09 11:02:22.146887802 +0000 UTC m=+987.596792583" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.177414 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" podStartSLOduration=3.151735398 podStartE2EDuration="36.177392656s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.988764002 +0000 UTC m=+953.438668783" lastFinishedPulling="2026-01-09 11:02:21.01442126 +0000 UTC m=+986.464326041" observedRunningTime="2026-01-09 11:02:22.171906384 +0000 UTC m=+987.621811155" watchObservedRunningTime="2026-01-09 11:02:22.177392656 +0000 UTC m=+987.627297437" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.207911 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" podStartSLOduration=34.736313981 podStartE2EDuration="37.207888728s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:02:18.54310356 +0000 UTC 
m=+983.993008351" lastFinishedPulling="2026-01-09 11:02:21.014678317 +0000 UTC m=+986.464583098" observedRunningTime="2026-01-09 11:02:22.202196451 +0000 UTC m=+987.652101232" watchObservedRunningTime="2026-01-09 11:02:22.207888728 +0000 UTC m=+987.657793509" Jan 09 11:02:23 crc kubenswrapper[4727]: I0109 11:02:23.108420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" event={"ID":"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb","Type":"ContainerStarted","Data":"a711ad6bab6c54a737576f72d1ec1085cb6c7f771cb444a53b455050e8c716d9"} Jan 09 11:02:23 crc kubenswrapper[4727]: I0109 11:02:23.130187 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" podStartSLOduration=4.05409574 podStartE2EDuration="38.130153448s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.810107245 +0000 UTC m=+953.260012016" lastFinishedPulling="2026-01-09 11:02:21.886164933 +0000 UTC m=+987.336069724" observedRunningTime="2026-01-09 11:02:23.127592447 +0000 UTC m=+988.577497288" watchObservedRunningTime="2026-01-09 11:02:23.130153448 +0000 UTC m=+988.580058299" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.100426 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.111156 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.169433 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.186066 4727 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.234103 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.300901 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.423247 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.440120 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.520560 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.529489 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.573168 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.659834 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.758409 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.876578 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.999791 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:02:27 crc kubenswrapper[4727]: I0109 11:02:27.040875 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:02:27 crc kubenswrapper[4727]: I0109 11:02:27.060209 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:02:27 crc kubenswrapper[4727]: I0109 11:02:27.143422 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:02:28 crc kubenswrapper[4727]: I0109 11:02:28.083779 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:29 crc kubenswrapper[4727]: I0109 11:02:29.022039 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:32 crc kubenswrapper[4727]: I0109 11:02:32.599048 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:36 crc kubenswrapper[4727]: I0109 11:02:36.532726 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:02:37 crc kubenswrapper[4727]: I0109 11:02:37.255604 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" event={"ID":"6040cced-684e-4521-9c4e-1debba9d5320","Type":"ContainerStarted","Data":"3f62e299c7603dd3e8592f12f4010be57384773e8b59fd1fcab1aeebc6ae6723"} Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.269252 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.286820 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" podStartSLOduration=12.611787383 podStartE2EDuration="54.286795938s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.736493618 +0000 UTC m=+953.186398399" lastFinishedPulling="2026-01-09 11:02:29.411502163 +0000 UTC m=+994.861406954" observedRunningTime="2026-01-09 11:02:39.28358796 +0000 UTC m=+1004.733492771" watchObservedRunningTime="2026-01-09 11:02:39.286795938 +0000 UTC m=+1004.736700719" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.404660 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.404726 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.404783 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.405591 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.405665 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639" gracePeriod=600 Jan 09 11:02:43 crc kubenswrapper[4727]: I0109 11:02:43.304542 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639" exitCode=0 Jan 09 11:02:43 crc kubenswrapper[4727]: I0109 11:02:43.304890 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639"} Jan 09 11:02:43 crc kubenswrapper[4727]: I0109 11:02:43.304940 4727 scope.go:117] "RemoveContainer" containerID="0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3" Jan 09 11:02:44 crc kubenswrapper[4727]: I0109 11:02:44.314420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c"} Jan 09 11:02:46 crc kubenswrapper[4727]: I0109 11:02:46.305399 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.669248 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.671190 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.678855 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.678913 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-khtmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.678920 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.679027 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.705852 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.739539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.739652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.753697 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.755364 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.765827 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.810365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842203 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842309 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842366 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.843348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.894651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.944163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.944207 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.944243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.945632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.946157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.966787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod 
\"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.007117 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.075640 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.400890 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:04 crc kubenswrapper[4727]: W0109 11:03:04.404866 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4792247f_ae97_41bf_955e_9b16eea098e2.slice/crio-3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839 WatchSource:0}: Error finding container 3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839: Status 404 returned error can't find the container with id 3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839 Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.408234 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.496179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" event={"ID":"4792247f-ae97-41bf-955e-9b16eea098e2","Type":"ContainerStarted","Data":"3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839"} Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.508827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:05 crc kubenswrapper[4727]: I0109 11:03:05.507336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" event={"ID":"998815fa-e774-44a2-ade3-1409ceee0b03","Type":"ContainerStarted","Data":"4c6ac55e742436a968c5cf0430e0e38d89af0c0bf3ea5c9361ba02a939fdb8f2"} Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.407510 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.439470 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.440951 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.454214 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.500864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.501029 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.501092 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: 
\"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.603128 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.603260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.604478 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.603298 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.608154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc 
kubenswrapper[4727]: I0109 11:03:06.647867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.733032 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.763023 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.775432 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.776992 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.859698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.911574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.912127 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.912184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.016644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.016773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.016854 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.018073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 
11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.019121 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.061748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.206593 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.296487 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:07 crc kubenswrapper[4727]: W0109 11:03:07.316300 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd88b93c8_236e_4b94_bd57_1e0259dd748e.slice/crio-9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240 WatchSource:0}: Error finding container 9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240: Status 404 returned error can't find the container with id 9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240 Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.540632 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerStarted","Data":"9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240"} Jan 09 11:03:07 crc kubenswrapper[4727]: 
I0109 11:03:07.586903 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.588330 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.590656 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591037 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xx2j9" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591071 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591198 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591202 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591353 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.602616 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.614529 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.629796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 
11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.629910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.629965 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630271 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630292 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.738409 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.740887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.740934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.740974 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741030 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741060 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" 
(UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.743287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.744356 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.745335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.745783 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.746474 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.746792 4727 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.755413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.757676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.759344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.763277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: W0109 11:03:07.763309 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a8626c4_f062_47b5_b8f6_f83b93195735.slice/crio-96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f WatchSource:0}: Error finding container 96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f: Status 404 returned error can't find the container with id 96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.782463 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.804845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.918792 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.944232 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.946052 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.948754 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.949004 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.949118 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.952657 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.953019 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.953268 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j7rc6" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.954502 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.966065 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050162 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050258 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050319 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050401 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8mrv\" (UniqueName: 
\"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154317 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154374 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154432 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc 
kubenswrapper[4727]: I0109 11:03:08.155523 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157316 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157726 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.163112 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.167488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.171482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.178300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.185265 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.193639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.286624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.568348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" event={"ID":"8a8626c4-f062-47b5-b8f6-f83b93195735","Type":"ContainerStarted","Data":"96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f"} Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.589993 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.896153 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:03:08 crc kubenswrapper[4727]: W0109 11:03:08.943589 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6a64ec_e743_4fa7_8e3e_5f628ebeea60.slice/crio-db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75 WatchSource:0}: Error finding container db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75: Status 404 returned error can't find the container with id db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75 Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.232847 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 09 11:03:09 crc 
kubenswrapper[4727]: I0109 11:03:09.234641 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.246679 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.247070 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.247344 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.249021 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zwcdt" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.251900 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.257158 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336564 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-default\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 
11:03:09.336650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336665 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336878 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336972 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwmk\" (UniqueName: \"kubernetes.io/projected/398bfc2d-be02-491c-af23-69fc4fc24817-kube-api-access-njwmk\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.337056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.337140 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-kolla-config\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-default\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwmk\" (UniqueName: \"kubernetes.io/projected/398bfc2d-be02-491c-af23-69fc4fc24817-kube-api-access-njwmk\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439111 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-kolla-config\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.440727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-default\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439701 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") device 
mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.441650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.441896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-kolla-config\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.442242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.465054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwmk\" (UniqueName: \"kubernetes.io/projected/398bfc2d-be02-491c-af23-69fc4fc24817-kube-api-access-njwmk\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.471283 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.471655 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.481813 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.574077 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.632156 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerStarted","Data":"db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75"} Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.634942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerStarted","Data":"992da0c7f6705ab24fafadc1d428d6d6e4d619876e23e4c5406d83cc5794cf74"} Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.586423 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.588032 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.594660 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.595062 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gk6mh" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.595152 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.595348 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.630618 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.681899 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682004 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77cb5\" (UniqueName: \"kubernetes.io/projected/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kube-api-access-77cb5\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682091 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785400 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785629 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77cb5\" (UniqueName: \"kubernetes.io/projected/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kube-api-access-77cb5\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.786229 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.786345 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.787905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.787948 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.797443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.799396 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.799743 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " 
pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.811660 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77cb5\" (UniqueName: \"kubernetes.io/projected/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kube-api-access-77cb5\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.816956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.895108 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.896842 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.906145 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.906737 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fvrvm" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.907746 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.913929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.919229 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-kolla-config\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-config-data\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989945 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74gk\" (UniqueName: \"kubernetes.io/projected/0e6e8606-58f3-4640-939b-afa25ce1ce03-kube-api-access-m74gk\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.990008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 
11:03:11.091767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.091942 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.091985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-kolla-config\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.092007 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-config-data\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.092028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74gk\" (UniqueName: \"kubernetes.io/projected/0e6e8606-58f3-4640-939b-afa25ce1ce03-kube-api-access-m74gk\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.095429 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-kolla-config\") pod 
\"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.096140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-config-data\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.096416 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.114984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.123069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74gk\" (UniqueName: \"kubernetes.io/projected/0e6e8606-58f3-4640-939b-afa25ce1ce03-kube-api-access-m74gk\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.219378 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.984391 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.988746 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.992200 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cpqd8" Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.997093 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.025106 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"kube-state-metrics-0\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " pod="openstack/kube-state-metrics-0" Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.127536 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"kube-state-metrics-0\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " pod="openstack/kube-state-metrics-0" Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.145871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"kube-state-metrics-0\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " pod="openstack/kube-state-metrics-0" Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.313713 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.975763 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwrp2"] Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.978027 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.981132 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tvwgr" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.981602 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.981600 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.990221 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.026462 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wxljq"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.028142 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.038771 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wxljq"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088423 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088486 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-ovn-controller-tls-certs\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088531 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdc8\" (UniqueName: \"kubernetes.io/projected/d81594ff-04f5-47c2-9620-db583609e9aa-kube-api-access-qpdc8\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088563 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-etc-ovs\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088582 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-log-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088610 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-lib\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088628 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-combined-ca-bundle\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwkr\" (UniqueName: \"kubernetes.io/projected/bdf6d307-98f2-40a7-8b6c-c149789150ef-kube-api-access-wdwkr\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d81594ff-04f5-47c2-9620-db583609e9aa-scripts\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-run\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088900 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-log\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf6d307-98f2-40a7-8b6c-c149789150ef-scripts\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d81594ff-04f5-47c2-9620-db583609e9aa-scripts\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190851 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-run\") pod \"ovn-controller-ovs-wxljq\" (UID: 
\"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-log\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190946 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf6d307-98f2-40a7-8b6c-c149789150ef-scripts\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-ovn-controller-tls-certs\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191080 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdc8\" (UniqueName: \"kubernetes.io/projected/d81594ff-04f5-47c2-9620-db583609e9aa-kube-api-access-qpdc8\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc 
kubenswrapper[4727]: I0109 11:03:16.191115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-etc-ovs\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191140 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-log-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191184 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-lib\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191236 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-combined-ca-bundle\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwkr\" 
(UniqueName: \"kubernetes.io/projected/bdf6d307-98f2-40a7-8b6c-c149789150ef-kube-api-access-wdwkr\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-run\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-lib\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191988 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.192018 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-log-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " 
pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.192156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-etc-ovs\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.192292 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-log\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.193428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d81594ff-04f5-47c2-9620-db583609e9aa-scripts\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.194735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf6d307-98f2-40a7-8b6c-c149789150ef-scripts\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.200110 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-ovn-controller-tls-certs\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.206011 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-combined-ca-bundle\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.211326 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdc8\" (UniqueName: \"kubernetes.io/projected/d81594ff-04f5-47c2-9620-db583609e9aa-kube-api-access-qpdc8\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.225437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwkr\" (UniqueName: \"kubernetes.io/projected/bdf6d307-98f2-40a7-8b6c-c149789150ef-kube-api-access-wdwkr\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.296903 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.345799 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.930916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.933624 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.936491 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.936735 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.936894 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pqthl" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.937046 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.937172 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.952115 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114196 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114270 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8fk\" (UniqueName: \"kubernetes.io/projected/2e25e0da-05c1-4d2e-8e27-c795be192a77-kube-api-access-rt8fk\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114392 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114411 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8fk\" (UniqueName: \"kubernetes.io/projected/2e25e0da-05c1-4d2e-8e27-c795be192a77-kube-api-access-rt8fk\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216557 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216605 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " 
pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.217592 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.218369 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.219386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.219824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.223024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.224169 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.224468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.239876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8fk\" (UniqueName: \"kubernetes.io/projected/2e25e0da-05c1-4d2e-8e27-c795be192a77-kube-api-access-rt8fk\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " 
pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.246888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.258556 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.412871 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.415224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.418530 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.418587 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.418902 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.419080 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9m8qm" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.434199 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582406 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582483 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582535 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582645 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzkh\" (UniqueName: \"kubernetes.io/projected/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-kube-api-access-hhzkh\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 
crc kubenswrapper[4727]: I0109 11:03:20.582700 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.684890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.684988 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685134 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685184 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685249 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhzkh\" (UniqueName: \"kubernetes.io/projected/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-kube-api-access-hhzkh\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685498 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.686107 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.686527 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.686683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.702201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.710204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.713161 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.720979 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhzkh\" (UniqueName: \"kubernetes.io/projected/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-kube-api-access-hhzkh\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.737825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.755657 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.355889 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.356977 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvvnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bwls8_openstack(998815fa-e774-44a2-ade3-1409ceee0b03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.358206 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" podUID="998815fa-e774-44a2-ade3-1409ceee0b03" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.409582 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.409844 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k9rmq_openstack(4792247f-ae97-41bf-955e-9b16eea098e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.411036 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" podUID="4792247f-ae97-41bf-955e-9b16eea098e2" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.426045 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.426197 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfc9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-pdq66_openstack(d88b93c8-236e-4b94-bd57-1e0259dd748e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.427490 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.446912 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.447072 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc4qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-6r876_openstack(8a8626c4-f062-47b5-b8f6-f83b93195735): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.448853 4727 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" podUID="8a8626c4-f062-47b5-b8f6-f83b93195735" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.818620 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" podUID="8a8626c4-f062-47b5-b8f6-f83b93195735" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.818682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" Jan 09 11:03:29 crc kubenswrapper[4727]: I0109 11:03:29.843229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 11:03:29 crc kubenswrapper[4727]: W0109 11:03:29.995655 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398bfc2d_be02_491c_af23_69fc4fc24817.slice/crio-771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7 WatchSource:0}: Error finding container 771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7: Status 404 returned error can't find the container with id 771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7 Jan 09 11:03:29 crc kubenswrapper[4727]: I0109 11:03:29.999104 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: W0109 11:03:30.001348 4727 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26965ac2_3dab_452c_8a34_83eadab4b929.slice/crio-049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc WatchSource:0}: Error finding container 049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc: Status 404 returned error can't find the container with id 049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.013388 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.020310 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.374727 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.472965 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: W0109 11:03:30.644801 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd81594ff_04f5_47c2_9620_db583609e9aa.slice/crio-d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6 WatchSource:0}: Error finding container d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6: Status 404 returned error can't find the container with id d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6 Jan 09 11:03:30 crc kubenswrapper[4727]: W0109 11:03:30.649760 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e25e0da_05c1_4d2e_8e27_c795be192a77.slice/crio-569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818 WatchSource:0}: Error finding container 
569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818: Status 404 returned error can't find the container with id 569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818 Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.710757 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.717850 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.787555 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"998815fa-e774-44a2-ade3-1409ceee0b03\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.787712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"998815fa-e774-44a2-ade3-1409ceee0b03\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.788452 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config" (OuterVolumeSpecName: "config") pod "998815fa-e774-44a2-ade3-1409ceee0b03" (UID: "998815fa-e774-44a2-ade3-1409ceee0b03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.795418 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh" (OuterVolumeSpecName: "kube-api-access-kvvnh") pod "998815fa-e774-44a2-ade3-1409ceee0b03" (UID: "998815fa-e774-44a2-ade3-1409ceee0b03"). InnerVolumeSpecName "kube-api-access-kvvnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.826950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerStarted","Data":"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.829784 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerStarted","Data":"049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.832599 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e25e0da-05c1-4d2e-8e27-c795be192a77","Type":"ContainerStarted","Data":"569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.835192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" event={"ID":"998815fa-e774-44a2-ade3-1409ceee0b03","Type":"ContainerDied","Data":"4c6ac55e742436a968c5cf0430e0e38d89af0c0bf3ea5c9361ba02a939fdb8f2"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.835332 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.836408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0e6e8606-58f3-4640-939b-afa25ce1ce03","Type":"ContainerStarted","Data":"c28a596d903243891199eda04e647131ee1feb3b54a523f604ee927a4279baab"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.838649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerStarted","Data":"7a7eb66b883e9a0a66bca284dd6fdfd311fc36e281015acef7c1c8b13abe892e"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.841001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerStarted","Data":"4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.843798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerStarted","Data":"771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.845975 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" event={"ID":"4792247f-ae97-41bf-955e-9b16eea098e2","Type":"ContainerDied","Data":"3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.846025 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.847666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2" event={"ID":"d81594ff-04f5-47c2-9620-db583609e9aa","Type":"ContainerStarted","Data":"d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.889468 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod \"4792247f-ae97-41bf-955e-9b16eea098e2\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.889630 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"4792247f-ae97-41bf-955e-9b16eea098e2\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.889826 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"4792247f-ae97-41bf-955e-9b16eea098e2\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.890311 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.890339 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: 
I0109 11:03:30.890755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config" (OuterVolumeSpecName: "config") pod "4792247f-ae97-41bf-955e-9b16eea098e2" (UID: "4792247f-ae97-41bf-955e-9b16eea098e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.890843 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4792247f-ae97-41bf-955e-9b16eea098e2" (UID: "4792247f-ae97-41bf-955e-9b16eea098e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.898563 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns" (OuterVolumeSpecName: "kube-api-access-dvgns") pod "4792247f-ae97-41bf-955e-9b16eea098e2" (UID: "4792247f-ae97-41bf-955e-9b16eea098e2"). InnerVolumeSpecName "kube-api-access-dvgns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.969615 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.980972 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.998409 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.998464 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.998479 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.074211 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.215286 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.221178 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.502585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wxljq"] Jan 09 11:03:31 crc kubenswrapper[4727]: W0109 11:03:31.737563 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a92393f_3fc8_4570_9e2f_b3aed9ce9bb8.slice/crio-8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee WatchSource:0}: Error finding container 8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee: Status 404 returned error can't find the container with id 8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.858424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"e91831c33a7ef81519243790ea5b18c65641460da17a60921162046cdb477acb"} Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.860376 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8","Type":"ContainerStarted","Data":"8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee"} Jan 09 11:03:32 crc kubenswrapper[4727]: I0109 11:03:32.870055 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4792247f-ae97-41bf-955e-9b16eea098e2" path="/var/lib/kubelet/pods/4792247f-ae97-41bf-955e-9b16eea098e2/volumes" Jan 09 11:03:32 crc kubenswrapper[4727]: I0109 11:03:32.870746 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998815fa-e774-44a2-ade3-1409ceee0b03" path="/var/lib/kubelet/pods/998815fa-e774-44a2-ade3-1409ceee0b03/volumes" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.931616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e25e0da-05c1-4d2e-8e27-c795be192a77","Type":"ContainerStarted","Data":"e1769745eee41b35a446a104041934be22bb24b754f2896fc7c445fd568054e2"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.934848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2" 
event={"ID":"d81594ff-04f5-47c2-9620-db583609e9aa","Type":"ContainerStarted","Data":"004ec23cffea5ee515e7291ccd33b721de72dc39a2d9da6a1931ce3e71ff33db"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.935075 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.936742 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"e1e3de1959adc113296b88070d1b82314efcd2cf2979f4f0a11107c4e80f0470"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.941186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0e6e8606-58f3-4640-939b-afa25ce1ce03","Type":"ContainerStarted","Data":"77fd03ab99813bf8ec1e830cd1b50448330e8ea8c1acdb09a9a2bb373218ca07"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.941352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.942999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerStarted","Data":"aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.943477 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.945557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8","Type":"ContainerStarted","Data":"dccee653b0e4ca3fc20dbc10644eb1a9b2f8f30642a17240aab9cec37d536871"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.947472 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerStarted","Data":"749206d3d963065c3cfd37c4274e1462377134e24d83298853087549af255b6b"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.949194 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerStarted","Data":"d71244c67d6c440004c9ba9762fdf69354f72c0b58f032567a7adfe6f9733a0c"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.966729 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwrp2" podStartSLOduration=16.245137619 podStartE2EDuration="22.966702706s" podCreationTimestamp="2026-01-09 11:03:15 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.651218504 +0000 UTC m=+1056.101123285" lastFinishedPulling="2026-01-09 11:03:37.372783591 +0000 UTC m=+1062.822688372" observedRunningTime="2026-01-09 11:03:37.95816868 +0000 UTC m=+1063.408073461" watchObservedRunningTime="2026-01-09 11:03:37.966702706 +0000 UTC m=+1063.416607497" Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.072204 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.608249926 podStartE2EDuration="26.072101238s" podCreationTimestamp="2026-01-09 11:03:12 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.003114496 +0000 UTC m=+1055.453019277" lastFinishedPulling="2026-01-09 11:03:37.466965808 +0000 UTC m=+1062.916870589" observedRunningTime="2026-01-09 11:03:38.06822699 +0000 UTC m=+1063.518131781" watchObservedRunningTime="2026-01-09 11:03:38.072101238 +0000 UTC m=+1063.522006019" Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.090849 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.151904875 podStartE2EDuration="28.090817422s" 
podCreationTimestamp="2026-01-09 11:03:10 +0000 UTC" firstStartedPulling="2026-01-09 11:03:29.832769178 +0000 UTC m=+1055.282673959" lastFinishedPulling="2026-01-09 11:03:31.771681735 +0000 UTC m=+1057.221586506" observedRunningTime="2026-01-09 11:03:38.090417392 +0000 UTC m=+1063.540322203" watchObservedRunningTime="2026-01-09 11:03:38.090817422 +0000 UTC m=+1063.540722203" Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.966067 4727 generic.go:334] "Generic (PLEG): container finished" podID="bdf6d307-98f2-40a7-8b6c-c149789150ef" containerID="e1e3de1959adc113296b88070d1b82314efcd2cf2979f4f0a11107c4e80f0470" exitCode=0 Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.966283 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerDied","Data":"e1e3de1959adc113296b88070d1b82314efcd2cf2979f4f0a11107c4e80f0470"} Jan 09 11:03:39 crc kubenswrapper[4727]: I0109 11:03:39.977566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"9ef24e3a77bb83a46b565e29bfc907ae65d435ed7a5de1f688ae8c9dcb457a5c"} Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.987447 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e25e0da-05c1-4d2e-8e27-c795be192a77","Type":"ContainerStarted","Data":"90608d469bcea6c32f9b76c5aa0b01b635a995a6de7e929fed500e416e8d8fe6"} Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.992668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"5de25bef9e2800edd8fe3384498eb106a5fa2fff29330d377e15ed57c1998c58"} Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.993372 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.993416 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.999339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8","Type":"ContainerStarted","Data":"258e27329ff44bb1e17ff8596d3a60b380eaad82950f1b0fbe95791c83c6ef15"} Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.044784 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.22824275 podStartE2EDuration="26.044765519s" podCreationTimestamp="2026-01-09 11:03:15 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.652335781 +0000 UTC m=+1056.102240562" lastFinishedPulling="2026-01-09 11:03:40.46885855 +0000 UTC m=+1065.918763331" observedRunningTime="2026-01-09 11:03:41.042327067 +0000 UTC m=+1066.492231848" watchObservedRunningTime="2026-01-09 11:03:41.044765519 +0000 UTC m=+1066.494670300" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.126999 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wxljq" podStartSLOduration=21.103752193 podStartE2EDuration="26.126970292s" podCreationTimestamp="2026-01-09 11:03:15 +0000 UTC" firstStartedPulling="2026-01-09 11:03:31.736839472 +0000 UTC m=+1057.186744253" lastFinishedPulling="2026-01-09 11:03:36.760057561 +0000 UTC m=+1062.209962352" observedRunningTime="2026-01-09 11:03:41.108410092 +0000 UTC m=+1066.558314873" watchObservedRunningTime="2026-01-09 11:03:41.126970292 +0000 UTC m=+1066.576875073" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.159415 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.437324448 podStartE2EDuration="22.159398215s" 
podCreationTimestamp="2026-01-09 11:03:19 +0000 UTC" firstStartedPulling="2026-01-09 11:03:31.765338724 +0000 UTC m=+1057.215243505" lastFinishedPulling="2026-01-09 11:03:40.487412501 +0000 UTC m=+1065.937317272" observedRunningTime="2026-01-09 11:03:41.1371051 +0000 UTC m=+1066.587009891" watchObservedRunningTime="2026-01-09 11:03:41.159398215 +0000 UTC m=+1066.609302996" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.258993 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.298942 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.757341 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.851055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.008895 4727 generic.go:334] "Generic (PLEG): container finished" podID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" exitCode=0 Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.009067 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerDied","Data":"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc"} Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.013974 4727 generic.go:334] "Generic (PLEG): container finished" podID="e90a87ab-2df7-4a4a-8854-6daf3322e3d1" containerID="749206d3d963065c3cfd37c4274e1462377134e24d83298853087549af255b6b" exitCode=0 Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.014201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerDied","Data":"749206d3d963065c3cfd37c4274e1462377134e24d83298853087549af255b6b"} Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.017863 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerDied","Data":"d71244c67d6c440004c9ba9762fdf69354f72c0b58f032567a7adfe6f9733a0c"} Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.024660 4727 generic.go:334] "Generic (PLEG): container finished" podID="398bfc2d-be02-491c-af23-69fc4fc24817" containerID="d71244c67d6c440004c9ba9762fdf69354f72c0b58f032567a7adfe6f9733a0c" exitCode=0 Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.026728 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.028245 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.084634 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.096071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.375906 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.442997 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.445274 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.448097 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.461101 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p58fw"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.462536 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.469895 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.474095 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p58fw"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.490721 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.531800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532182 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-combined-ca-bundle\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532426 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede60be2-7d1e-482a-b994-6c552d322575-config\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532545 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpxj\" (UniqueName: \"kubernetes.io/projected/ede60be2-7d1e-482a-b994-6c552d322575-kube-api-access-nvpxj\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 
11:03:42.532785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovs-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovn-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.533116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.628741 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635875 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635921 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: 
\"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-combined-ca-bundle\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636019 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede60be2-7d1e-482a-b994-6c552d322575-config\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpxj\" (UniqueName: \"kubernetes.io/projected/ede60be2-7d1e-482a-b994-6c552d322575-kube-api-access-nvpxj\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636114 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovs-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovn-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.637269 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.638951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovs-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.639030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovn-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.639673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.639963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.640228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.640250 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede60be2-7d1e-482a-b994-6c552d322575-config\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.641454 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.642382 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.642911 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.644009 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.644321 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.644447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-x2fhd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.645968 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-combined-ca-bundle\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.679261 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.682291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpxj\" (UniqueName: \"kubernetes.io/projected/ede60be2-7d1e-482a-b994-6c552d322575-kube-api-access-nvpxj\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.700871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.715744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.717132 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.720158 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.739515 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740964 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740991 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8n4\" (UniqueName: \"kubernetes.io/projected/5504697e-8969-45f2-92c6-3aba8688de1a-kube-api-access-nb8n4\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.741017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-scripts\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.741065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-config\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.786615 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.791083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-scripts\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-config\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845540 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.846386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-config\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865684 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-scripts\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865674 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb8n4\" (UniqueName: \"kubernetes.io/projected/5504697e-8969-45f2-92c6-3aba8688de1a-kube-api-access-nb8n4\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.867954 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.872033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 
11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.872735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.877545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.895532 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb8n4\" (UniqueName: \"kubernetes.io/projected/5504697e-8969-45f2-92c6-3aba8688de1a-kube-api-access-nb8n4\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.921207 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.988387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990806 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" 
Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.991854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.992114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.993392 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.031821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.040850 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" event={"ID":"8a8626c4-f062-47b5-b8f6-f83b93195735","Type":"ContainerDied","Data":"96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f"} Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.040934 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.043037 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerStarted","Data":"3e935267903d5c3555fc3eab5aa6d0d5b08d129dace48ea060558aed0d5213c7"} Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.088930 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.091639 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"8a8626c4-f062-47b5-b8f6-f83b93195735\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.092010 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"8a8626c4-f062-47b5-b8f6-f83b93195735\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.092322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"8a8626c4-f062-47b5-b8f6-f83b93195735\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.097533 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.260646132 podStartE2EDuration="34.097502412s" podCreationTimestamp="2026-01-09 11:03:09 +0000 UTC" firstStartedPulling="2026-01-09 11:03:29.999306809 +0000 UTC m=+1055.449211590" lastFinishedPulling="2026-01-09 11:03:36.836163089 +0000 UTC m=+1062.286067870" observedRunningTime="2026-01-09 11:03:43.074988141 +0000 UTC m=+1068.524892932" watchObservedRunningTime="2026-01-09 11:03:43.097502412 +0000 UTC m=+1068.547407193" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.099265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a8626c4-f062-47b5-b8f6-f83b93195735" (UID: "8a8626c4-f062-47b5-b8f6-f83b93195735"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.099406 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq" (OuterVolumeSpecName: "kube-api-access-wc4qq") pod "8a8626c4-f062-47b5-b8f6-f83b93195735" (UID: "8a8626c4-f062-47b5-b8f6-f83b93195735"). InnerVolumeSpecName "kube-api-access-wc4qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.102964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config" (OuterVolumeSpecName: "config") pod "8a8626c4-f062-47b5-b8f6-f83b93195735" (UID: "8a8626c4-f062-47b5-b8f6-f83b93195735"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.108277 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.195604 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.195645 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.195657 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.323605 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.418074 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p58fw"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.452255 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.469164 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.483474 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.574048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 09 11:03:43 crc kubenswrapper[4727]: W0109 11:03:43.581556 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5504697e_8969_45f2_92c6_3aba8688de1a.slice/crio-beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e WatchSource:0}: Error finding container beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e: Status 404 returned error can't find the container with id beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.650845 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.068051 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerStarted","Data":"519f8f5d5e7190352f37ebd7a547601e4ce345d0b63e6063379a577b0ca68c2c"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.070290 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86" exitCode=0 Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.070450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerDied","Data":"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.070575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerStarted","Data":"90625e00836e35ec42870d8838b1bad64246fb7214b4d03011fe48a0e3903723"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.075172 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" 
event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerStarted","Data":"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.075282 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.075294 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns" containerID="cri-o://1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" gracePeriod=10 Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.079455 4727 generic.go:334] "Generic (PLEG): container finished" podID="9af0367c-139f-443d-9b2b-54908e88f39c" containerID="87def1a1e5b96c750eada21838e69b6f07dfd2503065dbb58dd428a9c0764731" exitCode=0 Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.079816 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerDied","Data":"87def1a1e5b96c750eada21838e69b6f07dfd2503065dbb58dd428a9c0764731"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.079919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerStarted","Data":"64f12a2e916ba0978736fe5fcb0ce8bed71a92aea02c9a9e4d93c6d88a07c4ec"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.084336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5504697e-8969-45f2-92c6-3aba8688de1a","Type":"ContainerStarted","Data":"beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.102711 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-p58fw" event={"ID":"ede60be2-7d1e-482a-b994-6c552d322575","Type":"ContainerStarted","Data":"fd04a5259a42e7ccf2db63769c37b680ef294f6d19c1a3d3a3d60d891336297b"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.102770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p58fw" event={"ID":"ede60be2-7d1e-482a-b994-6c552d322575","Type":"ContainerStarted","Data":"8254e732664e97894563db5de08c5e1bf27ade4792397799ae16e934251edc03"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.120777 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.278665797 podStartE2EDuration="36.120754369s" podCreationTimestamp="2026-01-09 11:03:08 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.000227923 +0000 UTC m=+1055.450132714" lastFinishedPulling="2026-01-09 11:03:36.842316505 +0000 UTC m=+1062.292221286" observedRunningTime="2026-01-09 11:03:44.096591356 +0000 UTC m=+1069.546496137" watchObservedRunningTime="2026-01-09 11:03:44.120754369 +0000 UTC m=+1069.570659150" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.154359 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podStartSLOduration=4.085978235 podStartE2EDuration="38.154296029s" podCreationTimestamp="2026-01-09 11:03:06 +0000 UTC" firstStartedPulling="2026-01-09 11:03:07.322210139 +0000 UTC m=+1032.772114920" lastFinishedPulling="2026-01-09 11:03:41.390527933 +0000 UTC m=+1066.840432714" observedRunningTime="2026-01-09 11:03:44.143128826 +0000 UTC m=+1069.593033607" watchObservedRunningTime="2026-01-09 11:03:44.154296029 +0000 UTC m=+1069.604200820" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.190122 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-p58fw" podStartSLOduration=2.190101336 
podStartE2EDuration="2.190101336s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:44.188365043 +0000 UTC m=+1069.638269834" watchObservedRunningTime="2026-01-09 11:03:44.190101336 +0000 UTC m=+1069.640006137" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.632962 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.646925 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"d88b93c8-236e-4b94-bd57-1e0259dd748e\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.652540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"d88b93c8-236e-4b94-bd57-1e0259dd748e\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.652623 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"d88b93c8-236e-4b94-bd57-1e0259dd748e\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.664274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p" (OuterVolumeSpecName: "kube-api-access-tfc9p") pod "d88b93c8-236e-4b94-bd57-1e0259dd748e" (UID: "d88b93c8-236e-4b94-bd57-1e0259dd748e"). InnerVolumeSpecName "kube-api-access-tfc9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.703670 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d88b93c8-236e-4b94-bd57-1e0259dd748e" (UID: "d88b93c8-236e-4b94-bd57-1e0259dd748e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.733890 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config" (OuterVolumeSpecName: "config") pod "d88b93c8-236e-4b94-bd57-1e0259dd748e" (UID: "d88b93c8-236e-4b94-bd57-1e0259dd748e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.755944 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.755978 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.756109 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.871237 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8626c4-f062-47b5-b8f6-f83b93195735" path="/var/lib/kubelet/pods/8a8626c4-f062-47b5-b8f6-f83b93195735/volumes" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.113116 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5504697e-8969-45f2-92c6-3aba8688de1a","Type":"ContainerStarted","Data":"ab66be79242c993625da55e2412401fde94d26b34f5a3f862a677921b506bf5f"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.113475 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5504697e-8969-45f2-92c6-3aba8688de1a","Type":"ContainerStarted","Data":"e80ab5a9c392c933470e5688d54439ed9d23fc14b01ea81f8dfd12319f0d8058"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.113544 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.116224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerStarted","Data":"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.116472 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118807 4727 generic.go:334] "Generic (PLEG): container finished" podID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" exitCode=0 Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118862 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerDied","Data":"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118890 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" 
event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerDied","Data":"9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118910 4727 scope.go:117] "RemoveContainer" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.119049 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.122495 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerStarted","Data":"c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.137789 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.038005541 podStartE2EDuration="3.137767238s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="2026-01-09 11:03:43.583928201 +0000 UTC m=+1069.033832982" lastFinishedPulling="2026-01-09 11:03:44.683689898 +0000 UTC m=+1070.133594679" observedRunningTime="2026-01-09 11:03:45.134456935 +0000 UTC m=+1070.584361716" watchObservedRunningTime="2026-01-09 11:03:45.137767238 +0000 UTC m=+1070.587672019" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.147004 4727 scope.go:117] "RemoveContainer" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.168779 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" podStartSLOduration=3.168759484 podStartE2EDuration="3.168759484s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:45.159102379 +0000 UTC m=+1070.609007200" watchObservedRunningTime="2026-01-09 11:03:45.168759484 +0000 UTC m=+1070.618664265" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.178623 4727 scope.go:117] "RemoveContainer" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" Jan 09 11:03:45 crc kubenswrapper[4727]: E0109 11:03:45.179117 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5\": container with ID starting with 1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5 not found: ID does not exist" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.179187 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5"} err="failed to get container status \"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5\": rpc error: code = NotFound desc = could not find container \"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5\": container with ID starting with 1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5 not found: ID does not exist" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.179220 4727 scope.go:117] "RemoveContainer" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" Jan 09 11:03:45 crc kubenswrapper[4727]: E0109 11:03:45.179616 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc\": container with ID starting with 58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc not found: 
ID does not exist" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.179689 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc"} err="failed to get container status \"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc\": rpc error: code = NotFound desc = could not find container \"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc\": container with ID starting with 58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc not found: ID does not exist" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.190873 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.203065 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.205371 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" podStartSLOduration=3.205347271 podStartE2EDuration="3.205347271s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:45.200000565 +0000 UTC m=+1070.649905346" watchObservedRunningTime="2026-01-09 11:03:45.205347271 +0000 UTC m=+1070.655252052" Jan 09 11:03:46 crc kubenswrapper[4727]: I0109 11:03:46.133282 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:46 crc kubenswrapper[4727]: I0109 11:03:46.221717 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 09 11:03:46 crc kubenswrapper[4727]: I0109 11:03:46.875195 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" path="/var/lib/kubelet/pods/d88b93c8-236e-4b94-bd57-1e0259dd748e/volumes" Jan 09 11:03:49 crc kubenswrapper[4727]: I0109 11:03:49.574393 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 09 11:03:49 crc kubenswrapper[4727]: I0109 11:03:49.574845 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 09 11:03:50 crc kubenswrapper[4727]: I0109 11:03:50.920379 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:50 crc kubenswrapper[4727]: I0109 11:03:50.920487 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:52 crc kubenswrapper[4727]: I0109 11:03:52.790120 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.014565 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.014842 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" containerID="cri-o://c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3" gracePeriod=10 Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.015730 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.070550 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:03:53 crc kubenswrapper[4727]: E0109 11:03:53.072622 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.072655 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns" Jan 09 11:03:53 crc kubenswrapper[4727]: E0109 11:03:53.072702 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="init" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.072710 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="init" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.072982 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.074248 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.089460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.109563 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227349 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227687 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330750 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: 
\"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.352351 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.393962 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.912316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:03:53 crc kubenswrapper[4727]: W0109 11:03:53.915661 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72decd78_911c_43ff_9f4e_0d99d71cf84b.slice/crio-ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db WatchSource:0}: Error finding container ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db: Status 404 returned error can't find the container with id ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.112357 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.120760 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.125906 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.126688 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ql8vj" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.127450 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.127804 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.163227 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.210022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerStarted","Data":"ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db"} Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.247658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.247824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-cache\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.247909 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fb5d\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-kube-api-access-9fb5d\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.248155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-lock\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.248246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.350684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.350928 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.351047 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351102 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-cache\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.351121 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.351236 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:03:54.85119943 +0000 UTC m=+1080.301104251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fb5d\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-kube-api-access-9fb5d\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351461 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351795 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-lock\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351990 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-cache\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.352352 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-lock\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.400608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fb5d\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-kube-api-access-9fb5d\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.412682 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.451551 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-t2qwp"] Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.452756 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.455499 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.455617 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.463353 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t2qwp"] Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.467862 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555594 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555743 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555856 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555966 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.556136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.556277 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657554 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657597 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657790 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.658281 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.658483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.658528 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.662053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.662284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod 
\"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.662729 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.676681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.799619 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.860980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.861361 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.861400 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.861475 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:03:55.861447434 +0000 UTC m=+1081.311352215 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.221208 4727 generic.go:334] "Generic (PLEG): container finished" podID="9af0367c-139f-443d-9b2b-54908e88f39c" containerID="c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3" exitCode=0 Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.221301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerDied","Data":"c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3"} Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.223411 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerStarted","Data":"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1"} Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.280673 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.312423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t2qwp"] Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.403114 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="398bfc2d-be02-491c-af23-69fc4fc24817" containerName="galera" probeResult="failure" output=< Jan 09 11:03:55 crc kubenswrapper[4727]: wsrep_local_state_comment (Joined) differs from Synced Jan 09 11:03:55 crc kubenswrapper[4727]: > Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.527321 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.623870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.626647 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680306 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680468 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.689824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v" (OuterVolumeSpecName: "kube-api-access-jql8v") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "kube-api-access-jql8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.743046 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.744966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.752592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config" (OuterVolumeSpecName: "config") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.769066 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785521 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785560 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785571 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785584 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785596 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.887341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:55 crc kubenswrapper[4727]: E0109 11:03:55.887636 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 11:03:55 crc kubenswrapper[4727]: E0109 11:03:55.887680 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 11:03:55 crc kubenswrapper[4727]: E0109 11:03:55.887789 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:03:57.887755449 +0000 UTC m=+1083.337660240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.235052 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerStarted","Data":"7defc95c6498d89e6da8f7e9594f0703896df6675e2dac5d432b4b32dce7536c"} Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.236989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerDied","Data":"64f12a2e916ba0978736fe5fcb0ce8bed71a92aea02c9a9e4d93c6d88a07c4ec"} Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.237028 4727 scope.go:117] "RemoveContainer" containerID="c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3" 
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.237102 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.238847 4727 generic.go:334] "Generic (PLEG): container finished" podID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" exitCode=0 Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.240338 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerDied","Data":"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1"} Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.352382 4727 scope.go:117] "RemoveContainer" containerID="87def1a1e5b96c750eada21838e69b6f07dfd2503065dbb58dd428a9c0764731" Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.372484 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.399605 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.873458 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" path="/var/lib/kubelet/pods/9af0367c-139f-443d-9b2b-54908e88f39c/volumes" Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.251031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerStarted","Data":"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd"} Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.251802 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.279304 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-rj6lv" podStartSLOduration=4.279285221 podStartE2EDuration="4.279285221s" podCreationTimestamp="2026-01-09 11:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:57.273472884 +0000 UTC m=+1082.723377665" watchObservedRunningTime="2026-01-09 11:03:57.279285221 +0000 UTC m=+1082.729189992" Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.927704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:03:57 crc kubenswrapper[4727]: E0109 11:03:57.927953 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 11:03:57 crc kubenswrapper[4727]: E0109 11:03:57.927982 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 11:03:57 crc kubenswrapper[4727]: E0109 11:03:57.928049 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:04:01.928025965 +0000 UTC m=+1087.377930746 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:03:58 crc kubenswrapper[4727]: I0109 11:03:58.148677 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.271633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerStarted","Data":"fd08e66593fb75731b4677b270f51d5fcb873007a0ff1b0eec358d5c628765c7"} Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.301779 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-t2qwp" podStartSLOduration=2.033966724 podStartE2EDuration="5.301756146s" podCreationTimestamp="2026-01-09 11:03:54 +0000 UTC" firstStartedPulling="2026-01-09 11:03:55.324268535 +0000 UTC m=+1080.774173316" lastFinishedPulling="2026-01-09 11:03:58.592057957 +0000 UTC m=+1084.041962738" observedRunningTime="2026-01-09 11:03:59.297535629 +0000 UTC m=+1084.747440420" watchObservedRunningTime="2026-01-09 11:03:59.301756146 +0000 UTC m=+1084.751660927" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.671849 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.679614 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:03:59 crc kubenswrapper[4727]: E0109 11:03:59.679959 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="init" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.679978 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="init" Jan 09 11:03:59 crc kubenswrapper[4727]: E0109 11:03:59.680020 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.680027 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.680213 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.680824 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.682933 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.693490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.763969 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.764112 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " 
pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.875164 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.879876 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.882649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.926999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.039659 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.503296 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.815399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.817914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.822706 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.905284 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.905421 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.911294 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.913013 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.916145 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.920144 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007088 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007612 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"keystone-db-create-6qxrb\" 
(UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.008558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.039561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.109285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.109354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.110247 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod 
\"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.119452 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.120833 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.132110 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.147364 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.148912 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.211608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.211690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.213855 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.215333 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.225650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.232486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.232934 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.291159 4727 generic.go:334] "Generic (PLEG): container finished" podID="8c043374-06a3-4cb4-b105-d448282169b0" containerID="508aae6e73476bd7d8554f7bf79128adfc2937e36453761ce5d6c273144e8c65" exitCode=0 Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.291216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mks9d" event={"ID":"8c043374-06a3-4cb4-b105-d448282169b0","Type":"ContainerDied","Data":"508aae6e73476bd7d8554f7bf79128adfc2937e36453761ce5d6c273144e8c65"} Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.291247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mks9d" event={"ID":"8c043374-06a3-4cb4-b105-d448282169b0","Type":"ContainerStarted","Data":"2bf3d7bfd9ff5c75a3c4b900a3397369ea2ca9a10f62fb2d85e7b9615be81997"} Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316680 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316771 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.317580 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.338034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.418225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.418404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.419381 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.434498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.533422 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.563076 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.714046 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.746297 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.929724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:04:01 crc kubenswrapper[4727]: E0109 11:04:01.930439 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 11:04:01 crc kubenswrapper[4727]: E0109 11:04:01.930484 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 11:04:01 crc kubenswrapper[4727]: E0109 11:04:01.930581 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:04:09.930554282 +0000 UTC m=+1095.380459073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.998621 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.083485 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.308175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2gst" event={"ID":"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9","Type":"ContainerStarted","Data":"1f51dfdd818fb14101b6433f917a21c93101b4a9ea8fc4d6f3cec7bd10455ed9"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.309941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9ce5-account-create-update-cgwt7" event={"ID":"b5dba580-00b4-4bed-a734-78ac96b5cd4d","Type":"ContainerStarted","Data":"60aca13f224fb56772702304d509e56421ee68091611ab02f268739a0d563f53"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.312971 4727 generic.go:334] "Generic (PLEG): container finished" podID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerID="d6959b7da986b00bc70e51fdf39956f346afe58b899a2e451f5f896031407d83" exitCode=0 Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.313589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a4c-account-create-update-p6w9f" event={"ID":"b3fe1de7-6846-464a-8c23-b5cbc944ffaf","Type":"ContainerDied","Data":"d6959b7da986b00bc70e51fdf39956f346afe58b899a2e451f5f896031407d83"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.313746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a4c-account-create-update-p6w9f" 
event={"ID":"b3fe1de7-6846-464a-8c23-b5cbc944ffaf","Type":"ContainerStarted","Data":"6f39dc3c1660375ce3eb1d5f1b04d23e1399e5dcae67e0677da400036b1de267"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.316954 4727 generic.go:334] "Generic (PLEG): container finished" podID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerID="dfac37bf01ecc72f7cbe4e36980b1d63912e58d44854fd22b7eb51acb67a3482" exitCode=0 Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.317203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qxrb" event={"ID":"c54e2e39-4fb7-4ccb-98e4-437653bcc01c","Type":"ContainerDied","Data":"dfac37bf01ecc72f7cbe4e36980b1d63912e58d44854fd22b7eb51acb67a3482"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.317339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qxrb" event={"ID":"c54e2e39-4fb7-4ccb-98e4-437653bcc01c","Type":"ContainerStarted","Data":"e8f6190c7b981e11fe33deef696ee9dea4febb2c1b83c6e6bb5170c230e79959"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.825455 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.951353 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"8c043374-06a3-4cb4-b105-d448282169b0\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.951950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"8c043374-06a3-4cb4-b105-d448282169b0\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.953994 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c043374-06a3-4cb4-b105-d448282169b0" (UID: "8c043374-06a3-4cb4-b105-d448282169b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.970353 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k" (OuterVolumeSpecName: "kube-api-access-5bq7k") pod "8c043374-06a3-4cb4-b105-d448282169b0" (UID: "8c043374-06a3-4cb4-b105-d448282169b0"). InnerVolumeSpecName "kube-api-access-5bq7k". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.055160 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.055222 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.330589 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerID="4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0" exitCode=0
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.330693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerDied","Data":"4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0"}
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.341917 4727 generic.go:334] "Generic (PLEG): container finished" podID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerID="a4b50d5c7e5a2ac088b99192a0ef8ae1f0162a1bb12adc59cf61c748194423e5" exitCode=0
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.342021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2gst" event={"ID":"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9","Type":"ContainerDied","Data":"a4b50d5c7e5a2ac088b99192a0ef8ae1f0162a1bb12adc59cf61c748194423e5"}
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.347060 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerID="00e330dc8e4d5563bc7056af16edc5bfdbab81ae265d410bf050c38028359c89" exitCode=0
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.347265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9ce5-account-create-update-cgwt7" event={"ID":"b5dba580-00b4-4bed-a734-78ac96b5cd4d","Type":"ContainerDied","Data":"00e330dc8e4d5563bc7056af16edc5bfdbab81ae265d410bf050c38028359c89"}
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.356085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mks9d" event={"ID":"8c043374-06a3-4cb4-b105-d448282169b0","Type":"ContainerDied","Data":"2bf3d7bfd9ff5c75a3c4b900a3397369ea2ca9a10f62fb2d85e7b9615be81997"}
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.356157 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf3d7bfd9ff5c75a3c4b900a3397369ea2ca9a10f62fb2d85e7b9615be81997"
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.356263 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mks9d"
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.363158 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" exitCode=0
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.363642 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerDied","Data":"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3"}
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.395870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.564733 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"]
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.565011 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns" containerID="cri-o://f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" gracePeriod=10
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.787580 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qxrb"
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.854213 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f"
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") "
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879120 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") "
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879284 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") "
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") "
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.880899 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3fe1de7-6846-464a-8c23-b5cbc944ffaf" (UID: "b3fe1de7-6846-464a-8c23-b5cbc944ffaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.883730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c54e2e39-4fb7-4ccb-98e4-437653bcc01c" (UID: "c54e2e39-4fb7-4ccb-98e4-437653bcc01c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.887330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h" (OuterVolumeSpecName: "kube-api-access-w9s5h") pod "c54e2e39-4fb7-4ccb-98e4-437653bcc01c" (UID: "c54e2e39-4fb7-4ccb-98e4-437653bcc01c"). InnerVolumeSpecName "kube-api-access-w9s5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.888595 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx" (OuterVolumeSpecName: "kube-api-access-x7rpx") pod "b3fe1de7-6846-464a-8c23-b5cbc944ffaf" (UID: "b3fe1de7-6846-464a-8c23-b5cbc944ffaf"). InnerVolumeSpecName "kube-api-access-x7rpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981807 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981854 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981869 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981884 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.167347 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184211 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184368 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.195256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv" (OuterVolumeSpecName: "kube-api-access-5fltv") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "kube-api-access-5fltv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.244180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config" (OuterVolumeSpecName: "config") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.244909 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.253484 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286750 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286789 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286799 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286808 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.373309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a4c-account-create-update-p6w9f" event={"ID":"b3fe1de7-6846-464a-8c23-b5cbc944ffaf","Type":"ContainerDied","Data":"6f39dc3c1660375ce3eb1d5f1b04d23e1399e5dcae67e0677da400036b1de267"}
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.373653 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f39dc3c1660375ce3eb1d5f1b04d23e1399e5dcae67e0677da400036b1de267"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.373335 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.376184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerStarted","Data":"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56"}
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.376443 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.378106 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qxrb"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.378105 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qxrb" event={"ID":"c54e2e39-4fb7-4ccb-98e4-437653bcc01c","Type":"ContainerDied","Data":"e8f6190c7b981e11fe33deef696ee9dea4febb2c1b83c6e6bb5170c230e79959"}
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.378325 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f6190c7b981e11fe33deef696ee9dea4febb2c1b83c6e6bb5170c230e79959"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.380681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerStarted","Data":"9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633"}
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.380966 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383057 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" exitCode=0
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383277 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerDied","Data":"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"}
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerDied","Data":"90625e00836e35ec42870d8838b1bad64246fb7214b4d03011fe48a0e3903723"}
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383588 4727 scope.go:117] "RemoveContainer" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.433695 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.985151214 podStartE2EDuration="58.43366768s" podCreationTimestamp="2026-01-09 11:03:06 +0000 UTC" firstStartedPulling="2026-01-09 11:03:08.949566614 +0000 UTC m=+1034.399471395" lastFinishedPulling="2026-01-09 11:03:29.39808308 +0000 UTC m=+1054.847987861" observedRunningTime="2026-01-09 11:04:04.408835551 +0000 UTC m=+1089.858740352" watchObservedRunningTime="2026-01-09 11:04:04.43366768 +0000 UTC m=+1089.883572461"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.446184 4727 scope.go:117] "RemoveContainer" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.459986 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.752940661 podStartE2EDuration="58.459965247s" podCreationTimestamp="2026-01-09 11:03:06 +0000 UTC" firstStartedPulling="2026-01-09 11:03:08.620383342 +0000 UTC m=+1034.070288123" lastFinishedPulling="2026-01-09 11:03:29.327407928 +0000 UTC m=+1054.777312709" observedRunningTime="2026-01-09 11:04:04.457561575 +0000 UTC m=+1089.907466376" watchObservedRunningTime="2026-01-09 11:04:04.459965247 +0000 UTC m=+1089.909870038"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.484601 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"]
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.497158 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"]
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.498755 4727 scope.go:117] "RemoveContainer" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"
Jan 09 11:04:04 crc kubenswrapper[4727]: E0109 11:04:04.502623 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276\": container with ID starting with f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276 not found: ID does not exist" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.502666 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"} err="failed to get container status \"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276\": rpc error: code = NotFound desc = could not find container \"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276\": container with ID starting with f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276 not found: ID does not exist"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.502697 4727 scope.go:117] "RemoveContainer" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"
Jan 09 11:04:04 crc kubenswrapper[4727]: E0109 11:04:04.507261 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86\": container with ID starting with 2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86 not found: ID does not exist" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.507304 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"} err="failed to get container status \"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86\": rpc error: code = NotFound desc = could not find container \"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86\": container with ID starting with 2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86 not found: ID does not exist"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.810570 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.876011 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" path="/var/lib/kubelet/pods/7e8482c2-67f7-40f6-b225-af6914eed5c7/volumes"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.888888 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst"
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.899105 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.899148 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.899231 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") "
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.901770 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5dba580-00b4-4bed-a734-78ac96b5cd4d" (UID: "b5dba580-00b4-4bed-a734-78ac96b5cd4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.906337 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95" (OuterVolumeSpecName: "kube-api-access-b7w95") pod "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" (UID: "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9"). InnerVolumeSpecName "kube-api-access-b7w95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.906766 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7" (OuterVolumeSpecName: "kube-api-access-9lpr7") pod "b5dba580-00b4-4bed-a734-78ac96b5cd4d" (UID: "b5dba580-00b4-4bed-a734-78ac96b5cd4d"). InnerVolumeSpecName "kube-api-access-9lpr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.000851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") "
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001136 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001154 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001166 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001358 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" (UID: "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.102433 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.392482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2gst" event={"ID":"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9","Type":"ContainerDied","Data":"1f51dfdd818fb14101b6433f917a21c93101b4a9ea8fc4d6f3cec7bd10455ed9"}
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.392549 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f51dfdd818fb14101b6433f917a21c93101b4a9ea8fc4d6f3cec7bd10455ed9"
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.392601 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst"
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.400150 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9ce5-account-create-update-cgwt7" event={"ID":"b5dba580-00b4-4bed-a734-78ac96b5cd4d","Type":"ContainerDied","Data":"60aca13f224fb56772702304d509e56421ee68091611ab02f268739a0d563f53"}
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.400224 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60aca13f224fb56772702304d509e56421ee68091611ab02f268739a0d563f53"
Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.400452 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.373635 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m6676"]
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374239 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c043374-06a3-4cb4-b105-d448282169b0" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374253 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c043374-06a3-4cb4-b105-d448282169b0" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374268 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374274 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374290 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374307 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="init"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374312 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="init"
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374324 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerName="mariadb-database-create"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374329 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerName="mariadb-database-create"
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374342 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerName="mariadb-database-create"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374348 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerName="mariadb-database-create"
Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374356 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374361 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374532 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374546 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374562 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerName="mariadb-database-create"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374572 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374582 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerName="mariadb-database-create"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374594 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c043374-06a3-4cb4-b105-d448282169b0" containerName="mariadb-account-create-update"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.375112 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.389467 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m6676"]
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.480658 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"]
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.482391 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.488559 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.495895 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"]
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.530558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.530625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632427 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632626 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.633795 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.650864 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.696391 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m6676"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.734352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.734735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.736005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.765959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.802419 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc"
Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.316307 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m6676"]
Jan 09 11:04:07 crc kubenswrapper[4727]: W0109 11:04:07.320672 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e8ff110_0416_4e41_b9cf_a9f622e9a4c8.slice/crio-88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89 WatchSource:0}: Error finding container 88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89: Status 404 returned error can't find the container with id 88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89
Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.420533 4727 generic.go:334] "Generic (PLEG): container finished" podID="5a7df215-53c5-4771-95de-9af59255b3de" containerID="fd08e66593fb75731b4677b270f51d5fcb873007a0ff1b0eec358d5c628765c7" exitCode=0
Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.420557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerDied","Data":"fd08e66593fb75731b4677b270f51d5fcb873007a0ff1b0eec358d5c628765c7"}
Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.423582 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m6676" event={"ID":"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8","Type":"ContainerStarted","Data":"88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89"}
Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.467661 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"]
Jan 09 11:04:07 crc kubenswrapper[4727]: W0109 11:04:07.470786 4727 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5471acc_7f1a_4b92_babf_8dea0d8c5a5b.slice/crio-45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab WatchSource:0}: Error finding container 45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab: Status 404 returned error can't find the container with id 45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.201735 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.209161 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.278710 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.280141 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.282996 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.291596 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.373059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.373449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.432203 4727 generic.go:334] "Generic (PLEG): container finished" podID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerID="538236df2e722658ac6062177b9a40be31fb73d68537a811c36bed8ec6ebd0f2" exitCode=0 Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.432262 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m6676" event={"ID":"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8","Type":"ContainerDied","Data":"538236df2e722658ac6062177b9a40be31fb73d68537a811c36bed8ec6ebd0f2"} Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.434035 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" 
containerID="29e8e8db2a35769af205e4fe07dfcb0f161be2135de38c69be53aa1504c48cb3" exitCode=0 Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.434132 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65a5-account-create-update-swhhc" event={"ID":"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b","Type":"ContainerDied","Data":"29e8e8db2a35769af205e4fe07dfcb0f161be2135de38c69be53aa1504c48cb3"} Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.434192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65a5-account-create-update-swhhc" event={"ID":"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b","Type":"ContainerStarted","Data":"45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab"} Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.481628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.481760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.482722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 
11:04:08.500978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.640757 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.820937 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.878681 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c043374-06a3-4cb4-b105-d448282169b0" path="/var/lib/kubelet/pods/8c043374-06a3-4cb4-b105-d448282169b0/volumes" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: 
\"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991769 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991897 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991992 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.993885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.995535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.999197 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2" (OuterVolumeSpecName: "kube-api-access-d5kn2") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "kube-api-access-d5kn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.000905 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.031134 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts" (OuterVolumeSpecName: "scripts") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.047599 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.064447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093682 4727 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093715 4727 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093725 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093737 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 
11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093748 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093757 4727 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093766 4727 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.099839 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:04:09 crc kubenswrapper[4727]: W0109 11:04:09.114851 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14fbdc64_2108_41db_88bd_d978e9ce6550.slice/crio-e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164 WatchSource:0}: Error finding container e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164: Status 404 returned error can't find the container with id e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164 Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.443065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerStarted","Data":"4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d"} Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.443141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" 
event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerStarted","Data":"e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164"} Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.444418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerDied","Data":"7defc95c6498d89e6da8f7e9594f0703896df6675e2dac5d432b4b32dce7536c"} Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.444473 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7defc95c6498d89e6da8f7e9594f0703896df6675e2dac5d432b4b32dce7536c" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.444587 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.470344 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j9h4f" podStartSLOduration=1.470315588 podStartE2EDuration="1.470315588s" podCreationTimestamp="2026-01-09 11:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:09.465882455 +0000 UTC m=+1094.915787246" watchObservedRunningTime="2026-01-09 11:04:09.470315588 +0000 UTC m=+1094.920220379" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.934186 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.939028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.949108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.013934 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m6676" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.040376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod 
\"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041193 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041760 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" (UID: "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" (UID: "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.044524 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn" (OuterVolumeSpecName: "kube-api-access-97fqn") pod "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" (UID: "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8"). InnerVolumeSpecName "kube-api-access-97fqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.044619 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk" (OuterVolumeSpecName: "kube-api-access-7fbdk") pod "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" (UID: "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b"). InnerVolumeSpecName "kube-api-access-7fbdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.107129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143128 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143171 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143185 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143196 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.466159 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.466175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65a5-account-create-update-swhhc" event={"ID":"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b","Type":"ContainerDied","Data":"45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab"} Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.466250 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.467984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m6676" event={"ID":"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8","Type":"ContainerDied","Data":"88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89"} Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.468010 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.468101 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m6676" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.469255 4727 generic.go:334] "Generic (PLEG): container finished" podID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerID="4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d" exitCode=0 Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.469291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerDied","Data":"4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d"} Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.720691 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 09 11:04:10 crc kubenswrapper[4727]: W0109 11:04:10.726768 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb71205e9_ee26_48fb_aeeb_58eaee9ac9cf.slice/crio-cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29 WatchSource:0}: Error finding container cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29: Status 404 returned error can't find the container with id cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29 Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.353434 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mwrp2" podUID="d81594ff-04f5-47c2-9620-db583609e9aa" containerName="ovn-controller" probeResult="failure" output=< Jan 09 11:04:11 crc kubenswrapper[4727]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 09 11:04:11 crc kubenswrapper[4727]: > Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.397497 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 
11:04:11.397781 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.480790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29"} Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629340 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"] Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.629887 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerName="mariadb-database-create" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629917 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerName="mariadb-database-create" Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.629927 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" containerName="mariadb-account-create-update" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629937 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" containerName="mariadb-account-create-update" Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.629964 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7df215-53c5-4771-95de-9af59255b3de" containerName="swift-ring-rebalance" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629974 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7df215-53c5-4771-95de-9af59255b3de" containerName="swift-ring-rebalance" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.630189 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5a7df215-53c5-4771-95de-9af59255b3de" containerName="swift-ring-rebalance" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.630216 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" containerName="mariadb-account-create-update" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.630234 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerName="mariadb-database-create" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.631162 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: W0109 11:04:11.633605 4727 reflector.go:561] object-"openstack"/"ovncontroller-extra-scripts": failed to list *v1.ConfigMap: configmaps "ovncontroller-extra-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.633678 4727 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-extra-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovncontroller-extra-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.683175 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"] Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod 
\"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782959 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782990 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.783108 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.783148 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.820259 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.821950 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.827049 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lsgwk" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.827359 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.852574 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.887952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.888982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889075 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.891084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.891226 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.895000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.913444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.094766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.095018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.095147 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod 
\"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.095187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.099969 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.100748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.111211 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.129258 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc 
kubenswrapper[4727]: I0109 11:04:12.195935 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.345220 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.402221 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"14fbdc64-2108-41db-88bd-d978e9ce6550\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.402631 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"14fbdc64-2108-41db-88bd-d978e9ce6550\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.404256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14fbdc64-2108-41db-88bd-d978e9ce6550" (UID: "14fbdc64-2108-41db-88bd-d978e9ce6550"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.433949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l" (OuterVolumeSpecName: "kube-api-access-hgf7l") pod "14fbdc64-2108-41db-88bd-d978e9ce6550" (UID: "14fbdc64-2108-41db-88bd-d978e9ce6550"). InnerVolumeSpecName "kube-api-access-hgf7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.500013 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerDied","Data":"e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164"} Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.501233 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.501676 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.506923 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.506956 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.708523 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.710115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.871720 4727 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.059672 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.222123 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"] Jan 09 11:04:13 crc kubenswrapper[4727]: W0109 11:04:13.245427 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef375b35_8012_4b0a_8aae_b95e88229bcd.slice/crio-ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b WatchSource:0}: Error finding container ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b: Status 404 returned error can't find the container with id ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"0610193605ed8e7c0c06c6965309dcfdd633bf38059da2cf5c4d111db7fbee40"} Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"e53fc0a831b11b95c3a849263dd21707f950fde33b7ea43c295ad58c7410e1c6"} Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"52e33a498a1040a65fe5f8e0c1ffaa114b5f0f60b4d1deb5461c8f6a7b7a5b7d"} Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"12df1bdbadf6d7d355bdf4f0dd78448a115effa901893dca3ecd0d71d496e543"} Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.521135 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerStarted","Data":"863f21e160c716253c80003d82a8f94ef13eba15f96ed75ef0407b75d22b1fd7"} Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.522473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-rmlzz" event={"ID":"ef375b35-8012-4b0a-8aae-b95e88229bcd","Type":"ContainerStarted","Data":"ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b"} Jan 09 11:04:14 crc kubenswrapper[4727]: I0109 11:04:14.532598 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerID="5456968a5bb394405d1937902e90ca9c687f3ec8600257fc65b14f86f0be1050" exitCode=0 Jan 09 11:04:14 crc kubenswrapper[4727]: I0109 11:04:14.532784 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-rmlzz" event={"ID":"ef375b35-8012-4b0a-8aae-b95e88229bcd","Type":"ContainerDied","Data":"5456968a5bb394405d1937902e90ca9c687f3ec8600257fc65b14f86f0be1050"} Jan 09 11:04:15 crc kubenswrapper[4727]: I0109 11:04:15.544641 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"6e46a1d193de258d86b97fb51b561f7d9eb130d5445274f3ae94ad67bec78835"} Jan 09 11:04:15 crc kubenswrapper[4727]: I0109 11:04:15.544977 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"5e39fe941769bf53adc7222084ae7536ec9c2a373c1d360a37f70bfde09a2fdc"} Jan 09 
11:04:15 crc kubenswrapper[4727]: I0109 11:04:15.820161 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.005916 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.005986 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006016 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006040 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006109 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " Jan 09 11:04:16 crc kubenswrapper[4727]: 
I0109 11:04:16.006208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006324 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run" (OuterVolumeSpecName: "var-run") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006422 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006933 4727 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006967 4727 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006979 4727 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.007313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.007474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts" (OuterVolumeSpecName: "scripts") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.015714 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv" (OuterVolumeSpecName: "kube-api-access-72wkv") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "kube-api-access-72wkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.108485 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.109722 4727 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.109753 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.351998 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mwrp2" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.578984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"e962c3fe27b7e5dd9cdf6e8793b4e08269600f40d1ef2c69fd12ac8cc4ddcc7c"} Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.579033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"ec50fdd9c43320e397bf4728bf96742836509010dccf7afdaa0c3c08fa19ba83"} Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.584584 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-rmlzz" event={"ID":"ef375b35-8012-4b0a-8aae-b95e88229bcd","Type":"ContainerDied","Data":"ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b"} Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.584629 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.584687 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz" Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.969443 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"] Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.979875 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"] Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096006 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:17 crc kubenswrapper[4727]: E0109 11:04:17.096386 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerName="ovn-config" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096401 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerName="ovn-config" Jan 09 11:04:17 crc kubenswrapper[4727]: E0109 11:04:17.096413 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerName="mariadb-account-create-update" Jan 09 
11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096421 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerName="mariadb-account-create-update" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096634 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerName="ovn-config" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096655 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerName="mariadb-account-create-update" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.098473 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.111721 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.119784 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232584 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc 
kubenswrapper[4727]: I0109 11:04:17.232617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232739 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: 
\"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334754 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334886 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334909 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: 
\"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.335232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.335282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.335652 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.337932 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.338011 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " 
pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.360799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.456482 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.929466 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.952603 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.293841 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.411638 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.412970 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.492500 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.493648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.493685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.530350 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.531667 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.541618 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.542999 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.545751 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.547101 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.557953 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.598411 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.598992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.599026 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.599070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcpkf\" (UniqueName: 
\"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.601287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.648912 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.652303 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.653709 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.667526 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.679124 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.702694 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"c20d1e2d582bc7f3e0eb7d81bedadcb85a573b4d0b36134aa6e97e6e154971f0"} Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.702745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"45161b2783b38281c7608b606273cf4cbdcc1181b089d9ab210dbffe47203b2f"} Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703626 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703758 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.704572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.714006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerStarted","Data":"4d7ec45dfec7c18bfb601f8431acbdb3c6a8e95fbad6f9a1130eb2d12aa29e66"} Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.733026 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.742225 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.743978 4727 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.753445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.753709 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.753842 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.754056 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.775316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.781923 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.807385 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.812008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.812316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.812540 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.813401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.821184 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.822272 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.837189 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.843427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.889587 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" path="/var/lib/kubelet/pods/ef375b35-8012-4b0a-8aae-b95e88229bcd/volumes" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914567 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914711 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914751 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.915844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.916984 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.928814 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.929290 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.930493 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.939435 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.941270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.950484 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018788 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 
crc kubenswrapper[4727]: I0109 11:04:19.018838 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018959 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.019042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.027308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " 
pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.028180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.028266 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.057905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.058826 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.086629 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.102087 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.121535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.122309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.145306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.171774 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.262738 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.498564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.548937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.559776 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.581014 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.731437 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.750571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29t76" event={"ID":"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb","Type":"ContainerStarted","Data":"65f2fff2a226cff0ca9637112b12f4e0cadddcbe5397a486e4b4742cb4ad3a57"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.793352 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"9389358f597f41d1b1e23b0c3a124fc67fe2b6d451b65dbbb428a1df6d2952f8"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 
11:04:19.812015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerStarted","Data":"978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.814124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43da-account-create-update-4whcc" event={"ID":"22d06cd8-5172-4755-93f0-6c6aa036bed8","Type":"ContainerStarted","Data":"12421625aa6499759049a0d75177ae648ad7e0e2cd31f23558d670b3d4d0d249"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.816040 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dcf-account-create-update-pmcnw" event={"ID":"c1b70879-a5de-4ea1-9db1-82d9f0416a71","Type":"ContainerStarted","Data":"6dd13b5251934be21cbee5261f844ddb690fdea9fa0db87bb45d0ffc338ae4c4"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.857632 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwrp2-config-k2cwc" podStartSLOduration=2.8576075 podStartE2EDuration="2.8576075s" podCreationTimestamp="2026-01-09 11:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:19.846648436 +0000 UTC m=+1105.296553237" watchObservedRunningTime="2026-01-09 11:04:19.8576075 +0000 UTC m=+1105.307512281" Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.231466 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.246063 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.269657 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 
11:04:20 crc kubenswrapper[4727]: E0109 11:04:20.558435 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod108eb21f_902c_4942_8be4_9a3b11146c25.slice/crio-958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc14bbd99_7e5d_48ab_8573_ad9c5eea68fb.slice/crio-d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.832036 4727 generic.go:334] "Generic (PLEG): container finished" podID="108eb21f-902c-4942-8be4-9a3b11146c25" containerID="958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.832663 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hwqw8" event={"ID":"108eb21f-902c-4942-8be4-9a3b11146c25","Type":"ContainerDied","Data":"958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.832692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hwqw8" event={"ID":"108eb21f-902c-4942-8be4-9a3b11146c25","Type":"ContainerStarted","Data":"2116cbb0ea70d1e1a92b671155d1e85b1b6e41a668395bb8fced330e5e6d1ece"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.838589 4727 generic.go:334] "Generic (PLEG): container finished" podID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" containerID="d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.838685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29t76" 
event={"ID":"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb","Type":"ContainerDied","Data":"d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.841446 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerStarted","Data":"33185353540e45e975c16eee3ad01875091fa7bf07d875d2c477b2502139451f"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.843975 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerID="978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.844090 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerDied","Data":"978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.852065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rllkj" event={"ID":"46480603-3f1d-4589-ba8e-9026edee07c7","Type":"ContainerStarted","Data":"bdfca0ed2919072c582cebffbacb441a947d9a6c744e51a2362b1387cc781911"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.857969 4727 generic.go:334] "Generic (PLEG): container finished" podID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerID="fd86d26604fa990daf0250e4ca92d0297bfeb8649e742dfecf596e5d32e6713b" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.858096 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43da-account-create-update-4whcc" event={"ID":"22d06cd8-5172-4755-93f0-6c6aa036bed8","Type":"ContainerDied","Data":"fd86d26604fa990daf0250e4ca92d0297bfeb8649e742dfecf596e5d32e6713b"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.864198 4727 
generic.go:334] "Generic (PLEG): container finished" podID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerID="5afe7ea6f705be5c16f92e80a56b8b0f094dbbcf85b0af4db628a7dbbeab8019" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.875670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dcf-account-create-update-pmcnw" event={"ID":"c1b70879-a5de-4ea1-9db1-82d9f0416a71","Type":"ContainerDied","Data":"5afe7ea6f705be5c16f92e80a56b8b0f094dbbcf85b0af4db628a7dbbeab8019"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.875829 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d226-account-create-update-7gc64" event={"ID":"4ad382ed-924d-4c03-88b2-63d89690a56a","Type":"ContainerStarted","Data":"14f756c9d04da9228c97da74f1d1bbf739393fd403f464e8d09ae338dd94194f"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.916560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"0140e7dbdc255dcc4032eb1e51762ba2ce51bdd602f600e3921063b5ce0ea817"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.916612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"07c7499980dde0afe32fb192c15140b9713a74ded26008ea6303b08d705a095b"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.932481 4727 generic.go:334] "Generic (PLEG): container finished" podID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerID="8cbbc5a0e078338f400d60c2f06eefdbda48f9727dc50c6209388201bc809674" exitCode=0 Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.932896 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d226-account-create-update-7gc64" 
event={"ID":"4ad382ed-924d-4c03-88b2-63d89690a56a","Type":"ContainerDied","Data":"8cbbc5a0e078338f400d60c2f06eefdbda48f9727dc50c6209388201bc809674"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.965473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"76dcb50f7413cf7fdbb3bda5ea1e633c3dfc1d3d8958b9892fce69d3af15ffd9"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.965553 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"5b0090596a500c66b0e4d37e7ce2d61925436a372e9b9b2c65d0c8ff5c0ee7fe"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.977132 4727 generic.go:334] "Generic (PLEG): container finished" podID="46480603-3f1d-4589-ba8e-9026edee07c7" containerID="1263ecb7bda875303dddab37976768c97598ef07433b73e25914d8e050a30df9" exitCode=0 Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.977448 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rllkj" event={"ID":"46480603-3f1d-4589-ba8e-9026edee07c7","Type":"ContainerDied","Data":"1263ecb7bda875303dddab37976768c97598ef07433b73e25914d8e050a30df9"} Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.031714 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.373743476 podStartE2EDuration="29.031691282s" podCreationTimestamp="2026-01-09 11:03:53 +0000 UTC" firstStartedPulling="2026-01-09 11:04:10.728916211 +0000 UTC m=+1096.178820992" lastFinishedPulling="2026-01-09 11:04:17.386864017 +0000 UTC m=+1102.836768798" observedRunningTime="2026-01-09 11:04:22.012089449 +0000 UTC m=+1107.461994240" watchObservedRunningTime="2026-01-09 11:04:22.031691282 +0000 UTC m=+1107.481596063" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.298835 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.300447 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.305825 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.321913 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405342 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405439 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507531 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h29v7\" 
(UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507549 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507608 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.508906 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.509596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.509722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.509886 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.510483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.565448 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.631605 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.469602 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.475841 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.492750 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574602 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"108eb21f-902c-4942-8be4-9a3b11146c25\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574684 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"108eb21f-902c-4942-8be4-9a3b11146c25\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574741 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"46480603-3f1d-4589-ba8e-9026edee07c7\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574798 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod 
\"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574900 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"46480603-3f1d-4589-ba8e-9026edee07c7\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.575485 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "108eb21f-902c-4942-8be4-9a3b11146c25" (UID: "108eb21f-902c-4942-8be4-9a3b11146c25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.575490 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" (UID: "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.576221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46480603-3f1d-4589-ba8e-9026edee07c7" (UID: "46480603-3f1d-4589-ba8e-9026edee07c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.580159 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc" (OuterVolumeSpecName: "kube-api-access-pjddc") pod "46480603-3f1d-4589-ba8e-9026edee07c7" (UID: "46480603-3f1d-4589-ba8e-9026edee07c7"). InnerVolumeSpecName "kube-api-access-pjddc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.580758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf" (OuterVolumeSpecName: "kube-api-access-hcpkf") pod "108eb21f-902c-4942-8be4-9a3b11146c25" (UID: "108eb21f-902c-4942-8be4-9a3b11146c25"). InnerVolumeSpecName "kube-api-access-hcpkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.582342 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4" (OuterVolumeSpecName: "kube-api-access-mvbz4") pod "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" (UID: "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb"). InnerVolumeSpecName "kube-api-access-mvbz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678180 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678229 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678242 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678251 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678261 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678269 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.046693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hwqw8" 
event={"ID":"108eb21f-902c-4942-8be4-9a3b11146c25","Type":"ContainerDied","Data":"2116cbb0ea70d1e1a92b671155d1e85b1b6e41a668395bb8fced330e5e6d1ece"} Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.046738 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2116cbb0ea70d1e1a92b671155d1e85b1b6e41a668395bb8fced330e5e6d1ece" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.046801 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.055962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29t76" event={"ID":"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb","Type":"ContainerDied","Data":"65f2fff2a226cff0ca9637112b12f4e0cadddcbe5397a486e4b4742cb4ad3a57"} Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.056020 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f2fff2a226cff0ca9637112b12f4e0cadddcbe5397a486e4b4742cb4ad3a57" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.056103 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.063934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rllkj" event={"ID":"46480603-3f1d-4589-ba8e-9026edee07c7","Type":"ContainerDied","Data":"bdfca0ed2919072c582cebffbacb441a947d9a6c744e51a2362b1387cc781911"} Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.063998 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdfca0ed2919072c582cebffbacb441a947d9a6c744e51a2362b1387cc781911" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.064049 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:34 crc kubenswrapper[4727]: E0109 11:04:34.199180 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 09 11:04:34 crc kubenswrapper[4727]: E0109 11:04:34.202819 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kx4zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:
nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-9gv8v_openstack(e5667805-aff5-4227-88df-2d2440259e9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:04:34 crc kubenswrapper[4727]: E0109 11:04:34.204403 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-9gv8v" podUID="e5667805-aff5-4227-88df-2d2440259e9b" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.402000 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.427496 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.439952 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.466725 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.481793 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"4ad382ed-924d-4c03-88b2-63d89690a56a\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.481993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482163 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"4ad382ed-924d-4c03-88b2-63d89690a56a\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482212 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"22d06cd8-5172-4755-93f0-6c6aa036bed8\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482249 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482746 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"22d06cd8-5172-4755-93f0-6c6aa036bed8\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.483921 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22d06cd8-5172-4755-93f0-6c6aa036bed8" (UID: "22d06cd8-5172-4755-93f0-6c6aa036bed8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.484285 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1b70879-a5de-4ea1-9db1-82d9f0416a71" (UID: "c1b70879-a5de-4ea1-9db1-82d9f0416a71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.484413 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ad382ed-924d-4c03-88b2-63d89690a56a" (UID: "4ad382ed-924d-4c03-88b2-63d89690a56a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.490906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84" (OuterVolumeSpecName: "kube-api-access-s9r84") pod "c1b70879-a5de-4ea1-9db1-82d9f0416a71" (UID: "c1b70879-a5de-4ea1-9db1-82d9f0416a71"). 
InnerVolumeSpecName "kube-api-access-s9r84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491310 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491336 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491347 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491356 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.495427 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq" (OuterVolumeSpecName: "kube-api-access-vhrnq") pod "4ad382ed-924d-4c03-88b2-63d89690a56a" (UID: "4ad382ed-924d-4c03-88b2-63d89690a56a"). InnerVolumeSpecName "kube-api-access-vhrnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.508284 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz" (OuterVolumeSpecName: "kube-api-access-vl9cz") pod "22d06cd8-5172-4755-93f0-6c6aa036bed8" (UID: "22d06cd8-5172-4755-93f0-6c6aa036bed8"). InnerVolumeSpecName "kube-api-access-vl9cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.592937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.593587 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594029 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594143 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594171 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594247 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run" (OuterVolumeSpecName: "var-run") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.597783 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh" (OuterVolumeSpecName: "kube-api-access-88kfh") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "kube-api-access-88kfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.598729 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.598907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599766 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599881 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599886 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts" (OuterVolumeSpecName: "scripts") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599899 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599938 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599950 4727 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599963 4727 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599974 4727 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.702057 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.702157 4727 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.724210 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:34 crc kubenswrapper[4727]: W0109 11:04:34.733970 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb73609c1_ae60_4f6e_a0eb_e36b1fa9e977.slice/crio-77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0 WatchSource:0}: Error finding container 77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0: Status 404 returned error can't find the container with id 77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0 Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.108091 4727 generic.go:334] "Generic (PLEG): container finished" podID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerID="305d595a75c0483e8f124c062e4312746f4a5e5e0df8f72d52d1280623e0cba4" exitCode=0 Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.108201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerDied","Data":"305d595a75c0483e8f124c062e4312746f4a5e5e0df8f72d52d1280623e0cba4"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.108271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerStarted","Data":"77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.109854 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.110891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d226-account-create-update-7gc64" event={"ID":"4ad382ed-924d-4c03-88b2-63d89690a56a","Type":"ContainerDied","Data":"14f756c9d04da9228c97da74f1d1bbf739393fd403f464e8d09ae338dd94194f"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.110984 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f756c9d04da9228c97da74f1d1bbf739393fd403f464e8d09ae338dd94194f" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.112755 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.112782 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerDied","Data":"4d7ec45dfec7c18bfb601f8431acbdb3c6a8e95fbad6f9a1130eb2d12aa29e66"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.112851 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7ec45dfec7c18bfb601f8431acbdb3c6a8e95fbad6f9a1130eb2d12aa29e66" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.119158 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.119272 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43da-account-create-update-4whcc" event={"ID":"22d06cd8-5172-4755-93f0-6c6aa036bed8","Type":"ContainerDied","Data":"12421625aa6499759049a0d75177ae648ad7e0e2cd31f23558d670b3d4d0d249"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.119331 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12421625aa6499759049a0d75177ae648ad7e0e2cd31f23558d670b3d4d0d249" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.122554 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerStarted","Data":"6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.128651 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.129139 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dcf-account-create-update-pmcnw" event={"ID":"c1b70879-a5de-4ea1-9db1-82d9f0416a71","Type":"ContainerDied","Data":"6dd13b5251934be21cbee5261f844ddb690fdea9fa0db87bb45d0ffc338ae4c4"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.129178 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd13b5251934be21cbee5261f844ddb690fdea9fa0db87bb45d0ffc338ae4c4" Jan 09 11:04:35 crc kubenswrapper[4727]: E0109 11:04:35.131753 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-9gv8v" podUID="e5667805-aff5-4227-88df-2d2440259e9b" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.206882 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4xh9m" podStartSLOduration=3.004370555 podStartE2EDuration="24.206829342s" podCreationTimestamp="2026-01-09 11:04:11 +0000 UTC" firstStartedPulling="2026-01-09 11:04:13.078468458 +0000 UTC m=+1098.528373239" lastFinishedPulling="2026-01-09 11:04:34.280927245 +0000 UTC m=+1119.730832026" observedRunningTime="2026-01-09 11:04:35.191103721 +0000 UTC m=+1120.641008502" watchObservedRunningTime="2026-01-09 11:04:35.206829342 +0000 UTC m=+1120.656734123" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.586589 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.614702 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:36 crc 
kubenswrapper[4727]: I0109 11:04:36.137267 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerStarted","Data":"716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30"} Jan 09 11:04:36 crc kubenswrapper[4727]: I0109 11:04:36.137658 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:36 crc kubenswrapper[4727]: I0109 11:04:36.180496 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" podStartSLOduration=14.180470749 podStartE2EDuration="14.180470749s" podCreationTimestamp="2026-01-09 11:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:36.176217507 +0000 UTC m=+1121.626122288" watchObservedRunningTime="2026-01-09 11:04:36.180470749 +0000 UTC m=+1121.630375540" Jan 09 11:04:36 crc kubenswrapper[4727]: I0109 11:04:36.871796 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" path="/var/lib/kubelet/pods/2a1ee6a4-df6b-475f-89b5-2387d3664091/volumes" Jan 09 11:04:41 crc kubenswrapper[4727]: I0109 11:04:41.190894 4727 generic.go:334] "Generic (PLEG): container finished" podID="64657563-7e2f-46ef-a906-37e42398662a" containerID="6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8" exitCode=0 Jan 09 11:04:41 crc kubenswrapper[4727]: I0109 11:04:41.190982 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerDied","Data":"6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8"} Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.634801 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.650316 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.706906 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.707222 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-rj6lv" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" containerID="cri-o://0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" gracePeriod=10 Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768625 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768795 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768854 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.775526 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc" (OuterVolumeSpecName: "kube-api-access-5wgrc") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "kube-api-access-5wgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.775798 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.824048 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.842013 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data" (OuterVolumeSpecName: "config-data") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870718 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870755 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870768 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870779 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.090129 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.175968 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176444 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176605 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176708 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.182617 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp" (OuterVolumeSpecName: "kube-api-access-pc6zp") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "kube-api-access-pc6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216410 4727 generic.go:334] "Generic (PLEG): container finished" podID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" exitCode=0 Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216503 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerDied","Data":"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd"} Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerDied","Data":"ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db"} Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216623 4727 scope.go:117] "RemoveContainer" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216949 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.217864 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.218425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerDied","Data":"863f21e160c716253c80003d82a8f94ef13eba15f96ed75ef0407b75d22b1fd7"} Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.218451 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="863f21e160c716253c80003d82a8f94ef13eba15f96ed75ef0407b75d22b1fd7" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.218544 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.222033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.231076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.233864 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config" (OuterVolumeSpecName: "config") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.250689 4727 scope.go:117] "RemoveContainer" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.271132 4727 scope.go:117] "RemoveContainer" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.271739 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd\": container with ID starting with 0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd not found: ID does not exist" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.271783 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd"} err="failed to get container status \"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd\": rpc error: code = NotFound desc = could not find container \"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd\": container with ID starting with 0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd not found: ID does not exist" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.271812 4727 scope.go:117] "RemoveContainer" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.272320 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1\": container with ID starting with 
e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1 not found: ID does not exist" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.272342 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1"} err="failed to get container status \"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1\": rpc error: code = NotFound desc = could not find container \"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1\": container with ID starting with e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1 not found: ID does not exist" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278676 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278850 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278872 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278885 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278898 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc6zp\" (UniqueName: 
\"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.593486 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.601434 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.687739 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688090 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64657563-7e2f-46ef-a906-37e42398662a" containerName="glance-db-sync" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688103 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="64657563-7e2f-46ef-a906-37e42398662a" containerName="glance-db-sync" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688111 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688117 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688125 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerName="ovn-config" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688131 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerName="ovn-config" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688143 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" 
containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688766 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688785 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688791 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688805 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688810 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688835 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688841 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688849 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="init" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688854 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="init" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688863 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688870 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688885 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688891 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689076 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689093 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689099 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerName="ovn-config" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689112 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689125 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689133 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" containerName="mariadb-database-create" Jan 09 11:04:43 crc 
kubenswrapper[4727]: I0109 11:04:43.689142 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689150 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689159 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="64657563-7e2f-46ef-a906-37e42398662a" containerName="glance-db-sync" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.690230 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.712783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813413 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.914991 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915055 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916568 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916659 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916761 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.962483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod 
\"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:44 crc kubenswrapper[4727]: I0109 11:04:44.012257 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:44 crc kubenswrapper[4727]: I0109 11:04:44.515571 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:04:44 crc kubenswrapper[4727]: I0109 11:04:44.871173 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" path="/var/lib/kubelet/pods/72decd78-911c-43ff-9f4e-0d99d71cf84b/volumes" Jan 09 11:04:45 crc kubenswrapper[4727]: I0109 11:04:45.245445 4727 generic.go:334] "Generic (PLEG): container finished" podID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerID="23887e416fde2f38fe612379b7307c055f64d771c7bc20bcd11032e3c0ea705c" exitCode=0 Jan 09 11:04:45 crc kubenswrapper[4727]: I0109 11:04:45.245529 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerDied","Data":"23887e416fde2f38fe612379b7307c055f64d771c7bc20bcd11032e3c0ea705c"} Jan 09 11:04:45 crc kubenswrapper[4727]: I0109 11:04:45.245574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerStarted","Data":"3733ac359dd21c51d5f253b5404b05214c66a4c3eae7bdfe4843f65505ecec15"} Jan 09 11:04:46 crc kubenswrapper[4727]: I0109 11:04:46.256837 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerStarted","Data":"6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5"} Jan 09 11:04:46 crc kubenswrapper[4727]: I0109 11:04:46.257209 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:46 crc kubenswrapper[4727]: I0109 11:04:46.283878 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podStartSLOduration=3.283839187 podStartE2EDuration="3.283839187s" podCreationTimestamp="2026-01-09 11:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:46.279245976 +0000 UTC m=+1131.729150837" watchObservedRunningTime="2026-01-09 11:04:46.283839187 +0000 UTC m=+1131.733743998" Jan 09 11:04:52 crc kubenswrapper[4727]: I0109 11:04:52.326128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerStarted","Data":"9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef"} Jan 09 11:04:52 crc kubenswrapper[4727]: I0109 11:04:52.353674 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9gv8v" podStartSLOduration=3.305173895 podStartE2EDuration="34.35364736s" podCreationTimestamp="2026-01-09 11:04:18 +0000 UTC" firstStartedPulling="2026-01-09 11:04:20.305074489 +0000 UTC m=+1105.754979270" lastFinishedPulling="2026-01-09 11:04:51.353547954 +0000 UTC m=+1136.803452735" observedRunningTime="2026-01-09 11:04:52.350951312 +0000 UTC m=+1137.800856103" watchObservedRunningTime="2026-01-09 11:04:52.35364736 +0000 UTC m=+1137.803552181" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.014608 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.105102 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 
11:04:54.105427 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" containerID="cri-o://716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30" gracePeriod=10 Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.376625 4727 generic.go:334] "Generic (PLEG): container finished" podID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerID="716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30" exitCode=0 Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.377027 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerDied","Data":"716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30"} Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.686009 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834122 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834698 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834744 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834777 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.835044 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.861172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7" (OuterVolumeSpecName: "kube-api-access-h29v7") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "kube-api-access-h29v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.923023 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config" (OuterVolumeSpecName: "config") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.941897 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.943184 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.943221 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.943234 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.945697 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.948598 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.955278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.045061 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.045104 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.045117 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.388419 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerDied","Data":"77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0"} Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.388474 4727 scope.go:117] "RemoveContainer" containerID="716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.388501 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.391529 4727 generic.go:334] "Generic (PLEG): container finished" podID="e5667805-aff5-4227-88df-2d2440259e9b" containerID="9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef" exitCode=0 Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.391544 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerDied","Data":"9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef"} Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.425687 4727 scope.go:117] "RemoveContainer" containerID="305d595a75c0483e8f124c062e4312746f4a5e5e0df8f72d52d1280623e0cba4" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.448331 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.460115 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.735710 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.870767 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" path="/var/lib/kubelet/pods/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977/volumes" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.903190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"e5667805-aff5-4227-88df-2d2440259e9b\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.903422 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"e5667805-aff5-4227-88df-2d2440259e9b\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.903569 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"e5667805-aff5-4227-88df-2d2440259e9b\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.910338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx" (OuterVolumeSpecName: "kube-api-access-kx4zx") pod "e5667805-aff5-4227-88df-2d2440259e9b" (UID: "e5667805-aff5-4227-88df-2d2440259e9b"). InnerVolumeSpecName "kube-api-access-kx4zx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.933667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5667805-aff5-4227-88df-2d2440259e9b" (UID: "e5667805-aff5-4227-88df-2d2440259e9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.950919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data" (OuterVolumeSpecName: "config-data") pod "e5667805-aff5-4227-88df-2d2440259e9b" (UID: "e5667805-aff5-4227-88df-2d2440259e9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.006337 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.006374 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.006412 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.418308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" 
event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerDied","Data":"33185353540e45e975c16eee3ad01875091fa7bf07d875d2c477b2502139451f"} Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.418372 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33185353540e45e975c16eee3ad01875091fa7bf07d875d2c477b2502139451f" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.419047 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.746866 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:04:57 crc kubenswrapper[4727]: E0109 11:04:57.747252 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5667805-aff5-4227-88df-2d2440259e9b" containerName="keystone-db-sync" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747265 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5667805-aff5-4227-88df-2d2440259e9b" containerName="keystone-db-sync" Jan 09 11:04:57 crc kubenswrapper[4727]: E0109 11:04:57.747276 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747283 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" Jan 09 11:04:57 crc kubenswrapper[4727]: E0109 11:04:57.747313 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="init" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747321 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="init" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747567 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747579 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5667805-aff5-4227-88df-2d2440259e9b" containerName="keystone-db-sync" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.748461 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.764297 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.922960 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923851 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.924057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.929689 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.942386 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.954276 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.954871 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.955146 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.955759 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.961555 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.991388 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030718 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030761 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030781 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030873 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030906 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.032118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") 
pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.032696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.033230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.036300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.036477 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.084913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137584 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137826 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.166244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.185917 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.189348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.205718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.207539 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.218146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.300332 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.342780 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.349841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.370267 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416325 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9frsk" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416672 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416790 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416915 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.439292 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445353 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"horizon-9bd79bb5-sgxjp\" (UID: 
\"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445404 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.487829 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.489265 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.505028 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.505227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zbdpv" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.513232 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.526942 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.532708 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.532929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.562943 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563068 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563181 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563312 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563335 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563414 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92p2\" (UniqueName: 
\"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563651 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.570698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.570823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.570897 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.564433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.568320 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.580146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.580579 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.588570 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.631640 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.633097 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.639650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.639904 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f596n" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.640267 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.651323 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.665824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678311 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678411 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678479 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " 
pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678519 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678566 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.679154 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.685850 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.695939 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.703863 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.705158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.710102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " 
pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.711241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.716321 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.739300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.749398 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.749879 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.755170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.783128 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784872 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.796114 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.796336 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.796448 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fql5g" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.804005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.808690 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.814637 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.816526 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.826248 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.826349 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hx5p2" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.826541 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.839542 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.850213 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.867214 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.892982 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893117 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " 
pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893227 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893264 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893311 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893414 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " 
pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.904078 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.931945 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.946303 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.950531 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.952658 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lsgwk" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.953441 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.953593 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.953738 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.974641 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.976714 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.987290 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.000002 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001556 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001657 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001681 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001768 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001901 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.002074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.007922 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.016272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.017716 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.020605 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.033218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.036073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.039172 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.039293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.046484 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.051184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.056617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"cinder-db-sync-5c72l\" (UID: 
\"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.064721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.074924 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.132611 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 
11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139859 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.143671 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc 
kubenswrapper[4727]: I0109 11:04:59.143890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.143925 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.144005 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.144071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.144099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 
09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.173591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.174276 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.175914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.191631 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.236612 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.238579 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248871 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " 
pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249918 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.250175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.252628 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.252963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253162 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: 
\"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253501 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253775 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.255133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.260095 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.263789 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.264766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.265335 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.266837 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.273785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.284365 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.351869 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.351947 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352240 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352308 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352440 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352547 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352718 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352770 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.387888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.409654 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456039 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456171 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456199 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " 
pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456413 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 
11:04:59.456468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.458332 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.463240 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.464224 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.465040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.465123 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.466350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.466532 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.467999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.471481 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.472634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.473272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.502825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.521273 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.521427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.521599 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.524424 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.586706 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.609776 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.512534 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.571722 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.623220 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.653221 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.655745 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.682075 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.772346 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793887 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " 
pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793926 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793957 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.794023 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.834026 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896693 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.897293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.899627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " 
pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.900498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.916486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.917024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.028561 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.472103 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.484087 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.496405 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.513709 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.537030 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:05:01 crc kubenswrapper[4727]: E0109 11:05:01.547032 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.549096 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:05:01 crc kubenswrapper[4727]: W0109 11:05:01.571845 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda52e2c52_54f3_4f0d_9244_1ce7563deb78.slice/crio-22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b WatchSource:0}: Error finding container 22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b: Status 404 returned error can't find the container with id 22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b Jan 09 11:05:01 crc 
kubenswrapper[4727]: I0109 11:05:01.582453 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:01 crc kubenswrapper[4727]: W0109 11:05:01.616368 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f7de868_87b0_49c7_ad5e_7c528f181550.slice/crio-a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb WatchSource:0}: Error finding container a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb: Status 404 returned error can't find the container with id a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.628590 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:05:01 crc kubenswrapper[4727]: W0109 11:05:01.639151 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3179052d_0a48_4988_9696_814faeb20563.slice/crio-829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb WatchSource:0}: Error finding container 829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb: Status 404 returned error can't find the container with id 829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.639293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9bd79bb5-sgxjp" event={"ID":"718817e7-7114-4473-84e7-56349b861c3e","Type":"ContainerStarted","Data":"4ab00658b972d762f35df32ce42e03171f3c7a20dae5a1fc6a4479d78d970b43"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.659803 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.668903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerStarted","Data":"9f4d6e1e84339b6e76c479a4901b4c944d69b816e4b882b5ef6e50a8f5fbe884"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.672634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerStarted","Data":"feb2b5d615adb3db7bf2469345647c3857babf723321591e5d776e3acdeded1e"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.674047 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerStarted","Data":"22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.675725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerStarted","Data":"afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.675757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerStarted","Data":"e973b0683b8f22a32c62f57073fcb6e661f17c5966136ce4933d8facf809d424"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.695850 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.707607 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.716078 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s6xvj" podStartSLOduration=4.716053719 
podStartE2EDuration="4.716053719s" podCreationTimestamp="2026-01-09 11:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:01.705480766 +0000 UTC m=+1147.155385547" watchObservedRunningTime="2026-01-09 11:05:01.716053719 +0000 UTC m=+1147.165958490" Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.351562 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.712677 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.718080 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerStarted","Data":"61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.718110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerStarted","Data":"9fd2e2efda6f0fdf02a478cc42de4e68614bf7eee26261246b1c15c40d9abd07"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.720664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerStarted","Data":"a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.726794 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerID="0517845b382f4761d9f5fcd66722857b845de8c6eb388211fc09443dd7611f06" exitCode=0 
Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.726851 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerDied","Data":"0517845b382f4761d9f5fcd66722857b845de8c6eb388211fc09443dd7611f06"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.732269 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerStarted","Data":"235f1dbc729d8400ff61a870ff838d607f5f0556e4de01c9b178e4d7a4a3f9ca"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.749019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-95bf4c4d9-vwkb9" event={"ID":"1accd238-8dda-4882-b66b-96aefeb84df4","Type":"ContainerStarted","Data":"931c8c326cbc00e09537bfff38f3cacf375f75e745d5be55085827239bd67b5e"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.795386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerStarted","Data":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.795450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerStarted","Data":"161733537e9e43d072567aa0ebaf5bb7558fb6a7b7b38d11ea7ae89487092ac8"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.818126 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mfhnm" podStartSLOduration=4.81808074 podStartE2EDuration="4.81808074s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
11:05:02.739139555 +0000 UTC m=+1148.189044336" watchObservedRunningTime="2026-01-09 11:05:02.81808074 +0000 UTC m=+1148.267985531" Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.859642 4727 generic.go:334] "Generic (PLEG): container finished" podID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerID="35e35e5fffe61545ae2229c9a406ea280682b11d71b6da5a78e1848f4a83df3a" exitCode=0 Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.859930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" event={"ID":"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f","Type":"ContainerDied","Data":"35e35e5fffe61545ae2229c9a406ea280682b11d71b6da5a78e1848f4a83df3a"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.859976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" event={"ID":"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f","Type":"ContainerStarted","Data":"668d8a9a9ca39af05b665b849588a7df468c64707de37a55a4948a01511a92ba"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.884633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cf8ff49dc-bkwp8" event={"ID":"19039fe6-ce4a-4e84-b355-9ed185f05060","Type":"ContainerStarted","Data":"a45c0fe9b2415ced716e83b8091dd784775539c8582b821d3ea575bffcd3c2b8"} Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.527045 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691308 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691363 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691462 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691601 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691658 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.718964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m" (OuterVolumeSpecName: "kube-api-access-lwz6m") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "kube-api-access-lwz6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.739416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config" (OuterVolumeSpecName: "config") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.747553 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.748880 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.768209 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.779302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796195 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796230 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796242 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796250 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") on node \"crc\" DevicePath \"\"" Jan 09 
11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796260 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796268 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.940670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" event={"ID":"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f","Type":"ContainerDied","Data":"668d8a9a9ca39af05b665b849588a7df468c64707de37a55a4948a01511a92ba"} Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.940741 4727 scope.go:117] "RemoveContainer" containerID="35e35e5fffe61545ae2229c9a406ea280682b11d71b6da5a78e1848f4a83df3a" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.940751 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.951339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerStarted","Data":"0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc"} Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.951406 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.980478 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" podStartSLOduration=5.980458042 podStartE2EDuration="5.980458042s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:03.977271411 +0000 UTC m=+1149.427176192" watchObservedRunningTime="2026-01-09 11:05:03.980458042 +0000 UTC m=+1149.430362823" Jan 09 11:05:04 crc kubenswrapper[4727]: I0109 11:05:04.028554 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:04 crc kubenswrapper[4727]: I0109 11:05:04.039056 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:04 crc kubenswrapper[4727]: I0109 11:05:04.890961 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" path="/var/lib/kubelet/pods/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f/volumes" Jan 09 11:05:05 crc kubenswrapper[4727]: I0109 11:05:05.027000 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerStarted","Data":"6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4"} Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.064969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerStarted","Data":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.065595 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" containerID="cri-o://d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.065682 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-httpd" containerID="cri-o://472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.075315 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerStarted","Data":"912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4"} Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.075558 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" containerID="cri-o://6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.075721 4727 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" containerID="cri-o://912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.111444 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.111420377 podStartE2EDuration="8.111420377s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:06.107724902 +0000 UTC m=+1151.557629683" watchObservedRunningTime="2026-01-09 11:05:06.111420377 +0000 UTC m=+1151.561325158" Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.147263 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.147233855 podStartE2EDuration="8.147233855s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:06.142960633 +0000 UTC m=+1151.592865414" watchObservedRunningTime="2026-01-09 11:05:06.147233855 +0000 UTC m=+1151.597138636" Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.948542 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.094780 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095116 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095201 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095238 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095340 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.097375 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs" (OuterVolumeSpecName: "logs") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.098391 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.102933 4727 generic.go:334] "Generic (PLEG): container finished" podID="08d6e612-28e9-41fc-8409-799a7a033814" containerID="912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4" exitCode=0 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.102972 4727 generic.go:334] "Generic (PLEG): container finished" podID="08d6e612-28e9-41fc-8409-799a7a033814" containerID="6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4" exitCode=143 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.103012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerDied","Data":"912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.103041 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerDied","Data":"6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107032 4727 generic.go:334] "Generic (PLEG): container finished" podID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" exitCode=0 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107065 4727 generic.go:334] "Generic (PLEG): container finished" podID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" exitCode=143 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerDied","Data":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerDied","Data":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107116 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerDied","Data":"161733537e9e43d072567aa0ebaf5bb7558fb6a7b7b38d11ea7ae89487092ac8"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107134 4727 scope.go:117] "RemoveContainer" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107283 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.114171 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q" (OuterVolumeSpecName: "kube-api-access-gfw4q") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "kube-api-access-gfw4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.114616 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts" (OuterVolumeSpecName: "scripts") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.146778 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.164308 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.164904 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data" (OuterVolumeSpecName: "config-data") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.173811 4727 scope.go:117] "RemoveContainer" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.198953 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200544 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200568 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200582 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200618 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200630 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200641 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 
11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200651 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200710 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.242327 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302292 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302498 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302542 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302633 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302710 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303043 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303466 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs" (OuterVolumeSpecName: "logs") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303500 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.309774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts" (OuterVolumeSpecName: "scripts") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.311688 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.312531 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf" (OuterVolumeSpecName: "kube-api-access-zmlqf") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "kube-api-access-zmlqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.316119 4727 scope.go:117] "RemoveContainer" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.329485 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": container with ID starting with 472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b not found: ID does not exist" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.329549 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} err="failed to get container status \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": rpc error: code = NotFound desc = could not find container \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": container with ID starting with 472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.329579 4727 scope.go:117] "RemoveContainer" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 
11:05:07.331633 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": container with ID starting with d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5 not found: ID does not exist" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.331653 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} err="failed to get container status \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": rpc error: code = NotFound desc = could not find container \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": container with ID starting with d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5 not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.331668 4727 scope.go:117] "RemoveContainer" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.332081 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} err="failed to get container status \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": rpc error: code = NotFound desc = could not find container \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": container with ID starting with 472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.332096 4727 scope.go:117] "RemoveContainer" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc 
kubenswrapper[4727]: I0109 11:05:07.332291 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} err="failed to get container status \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": rpc error: code = NotFound desc = could not find container \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": container with ID starting with d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5 not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.358578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.360634 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.383105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data" (OuterVolumeSpecName: "config-data") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410363 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410746 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410816 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410981 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.411064 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.411125 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.411177 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.431150 4727 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.487625 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.502105 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.516426 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.528906 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.530906 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.530987 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531049 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerName="init" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531060 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerName="init" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531079 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531090 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" 
containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531137 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531147 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531172 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531180 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531565 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531626 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531651 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerName="init" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531660 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531669 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.535412 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.542071 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.543832 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.582116 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.610320 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 
11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618700 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc 
kubenswrapper[4727]: I0109 11:05:07.652748 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.654393 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.662070 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.670278 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.700562 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.701440 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-xjqb7 logs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="18f7b91d-8aea-4cb4-bd21-3e29eadcf668" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" 
Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725495 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725626 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc 
kubenswrapper[4727]: I0109 11:05:07.725701 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725752 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725920 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725945 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.726310 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.727722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.728775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.751274 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.751271 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.756438 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.758009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.759035 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.763304 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.802029 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.815099 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57c89666d8-8fhd6"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.816789 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833377 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835166 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: 
\"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.840424 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57c89666d8-8fhd6"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.844280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.847408 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.847718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.848028 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod 
\"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.864573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-scripts\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938680 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-secret-key\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-tls-certs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938754 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-config-data\") pod 
\"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-combined-ca-bundle\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89031be7-ef50-45c8-b43f-b34f66012f21-logs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938953 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5s2\" (UniqueName: \"kubernetes.io/projected/89031be7-ef50-45c8-b43f-b34f66012f21-kube-api-access-7g5s2\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.009291 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041043 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-combined-ca-bundle\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041093 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89031be7-ef50-45c8-b43f-b34f66012f21-logs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041134 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5s2\" (UniqueName: \"kubernetes.io/projected/89031be7-ef50-45c8-b43f-b34f66012f21-kube-api-access-7g5s2\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041219 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-scripts\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-secret-key\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc 
kubenswrapper[4727]: I0109 11:05:08.041300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-tls-certs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041333 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-config-data\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.043557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-scripts\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.043583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-config-data\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.043657 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89031be7-ef50-45c8-b43f-b34f66012f21-logs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.046169 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-secret-key\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.047245 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-tls-certs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.051222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-combined-ca-bundle\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.069131 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5s2\" (UniqueName: \"kubernetes.io/projected/89031be7-ef50-45c8-b43f-b34f66012f21-kube-api-access-7g5s2\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.122031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerDied","Data":"235f1dbc729d8400ff61a870ff838d607f5f0556e4de01c9b178e4d7a4a3f9ca"} Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.122102 4727 scope.go:117] "RemoveContainer" containerID="912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.122059 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.129710 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerID="afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd" exitCode=0 Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.129794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.130173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerDied","Data":"afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd"} Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.151518 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.156984 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249502 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249587 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249633 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249711 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249767 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249798 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.250233 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs" (OuterVolumeSpecName: "logs") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.251244 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.252709 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.261751 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.261786 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.261890 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.265447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts" (OuterVolumeSpecName: "scripts") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.271272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.271493 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.274966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data" (OuterVolumeSpecName: "config-data") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.280090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7" (OuterVolumeSpecName: "kube-api-access-xjqb7") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "kube-api-access-xjqb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.306740 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.325351 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.341338 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.343966 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.348732 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.349101 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.355195 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366863 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366902 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366917 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366929 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366943 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366979 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.389763 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 
11:05:08.468901 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468921 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.469011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: 
I0109 11:05:08.469032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.469085 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.570866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.570935 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.570958 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69jrs\" (UniqueName: 
\"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571819 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.572143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.574361 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.594187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.594445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.596096 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " 
pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.604388 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.616480 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.662193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.682616 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.874266 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d6e612-28e9-41fc-8409-799a7a033814" path="/var/lib/kubelet/pods/08d6e612-28e9-41fc-8409-799a7a033814/volumes" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.875691 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" path="/var/lib/kubelet/pods/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6/volumes" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.154274 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.253229 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.267403 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.288113 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.290124 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.293797 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.293904 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.302520 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.405389 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.405995 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.424941 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.424994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425243 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527716 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527792 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527856 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527935 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc 
kubenswrapper[4727]: I0109 11:05:09.528023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.528308 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.529484 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.529885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.535542 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.539264 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.539950 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.540602 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.546479 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.566114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.589725 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:09 crc 
kubenswrapper[4727]: I0109 11:05:09.620588 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.673226 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.674166 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" containerID="cri-o://6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5" gracePeriod=10 Jan 09 11:05:10 crc kubenswrapper[4727]: I0109 11:05:10.166624 4727 generic.go:334] "Generic (PLEG): container finished" podID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerID="6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5" exitCode=0 Jan 09 11:05:10 crc kubenswrapper[4727]: I0109 11:05:10.166675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerDied","Data":"6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5"} Jan 09 11:05:10 crc kubenswrapper[4727]: I0109 11:05:10.873993 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f7b91d-8aea-4cb4-bd21-3e29eadcf668" path="/var/lib/kubelet/pods/18f7b91d-8aea-4cb4-bd21-3e29eadcf668/volumes" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.515294 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.605679 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.605816 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.605907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.606023 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.606056 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.606262 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.615236 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p" (OuterVolumeSpecName: "kube-api-access-wn82p") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "kube-api-access-wn82p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.616181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.616987 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts" (OuterVolumeSpecName: "scripts") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.622926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.644352 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data" (OuterVolumeSpecName: "config-data") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.659581 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.713924 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714352 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714450 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714555 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 
11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714644 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.715407 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:11 crc kubenswrapper[4727]: E0109 11:05:11.959757 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.203093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerDied","Data":"e973b0683b8f22a32c62f57073fcb6e661f17c5966136ce4933d8facf809d424"} Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.203138 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e973b0683b8f22a32c62f57073fcb6e661f17c5966136ce4933d8facf809d424" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.203194 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.604926 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.611613 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.706619 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:05:12 crc kubenswrapper[4727]: E0109 11:05:12.707090 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerName="keystone-bootstrap" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.707112 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerName="keystone-bootstrap" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.707296 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerName="keystone-bootstrap" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.707979 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.710094 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.710446 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.711017 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.711222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.712036 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.720585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840230 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840284 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840393 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840654 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.874670 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" path="/var/lib/kubelet/pods/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44/volumes" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.942936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " 
pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.943011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.943058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.943118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.944900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.945060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 
11:05:12.948119 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.948858 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.954127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.960888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.961308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.963944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnppz\" (UniqueName: 
\"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:13 crc kubenswrapper[4727]: I0109 11:05:13.035566 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:14 crc kubenswrapper[4727]: I0109 11:05:14.014245 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.207689 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:22 crc kubenswrapper[4727]: I0109 11:05:22.745882 4727 scope.go:117] "RemoveContainer" containerID="6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.749764 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.749941 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h7h5bch5cfh57ch8h66ch5b4h5f5h699h66hb4h574hc7h65bh5d8hd8h677h79hc4h5bh5cch677h5b8h668h64bh8h58h5dfh79hcfh68cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hs2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-9bd79bb5-sgxjp_openstack(718817e7-7114-4473-84e7-56349b861c3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.752874 
4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-9bd79bb5-sgxjp" podUID="718817e7-7114-4473-84e7-56349b861c3e" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.766743 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.767045 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n667h698h5dchbch656h5cch5ddh585h5fch699h65bh99h68chcbh654h55dh5c4h588h5cfh76h75h5dh599h575h698hfbh5f5h9dh696h58dh8fh5dfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwh9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-cf8ff49dc-bkwp8_openstack(19039fe6-ce4a-4e84-b355-9ed185f05060): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 
11:05:22.769078 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-cf8ff49dc-bkwp8" podUID="19039fe6-ce4a-4e84-b355-9ed185f05060" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.297699 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.297951 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n92p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pss24_openstack(a52e2c52-54f3-4f0d-9244-1ce7563deb78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.299219 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pss24" 
podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.314424 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.314628 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68h644h57ch5bbh5d4hc7hddh586h5ddh57bh5b7h5d4hfbh68bh6bh9dh68bh5f6h54bh586h695h56h84h56ch7fh58fh5ffh559hf8h666h579h564q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtckg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesy
stem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-95bf4c4d9-vwkb9_openstack(1accd238-8dda-4882-b66b-96aefeb84df4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.326591 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pss24" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.337861 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-95bf4c4d9-vwkb9" podUID="1accd238-8dda-4882-b66b-96aefeb84df4" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.485224 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583275 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583406 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583544 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583726 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583805 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.600686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd" (OuterVolumeSpecName: "kube-api-access-ml5xd") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "kube-api-access-ml5xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.644401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.668342 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.668456 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.682264 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697751 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697797 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697811 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697821 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697832 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.698868 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config" (OuterVolumeSpecName: "config") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.799895 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.013389 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.031108 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.336007 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerDied","Data":"3733ac359dd21c51d5f253b5404b05214c66a4c3eae7bdfe4843f65505ecec15"} Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.336033 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.414941 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.425534 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.873106 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" path="/var/lib/kubelet/pods/863b94ea-e707-4c6a-8aa3-3241733e5257/volumes" Jan 09 11:05:27 crc kubenswrapper[4727]: I0109 11:05:27.375292 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerID="61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e" exitCode=0 Jan 09 11:05:27 crc kubenswrapper[4727]: I0109 11:05:27.375487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerDied","Data":"61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e"} Jan 09 11:05:32 crc kubenswrapper[4727]: E0109 11:05:32.457789 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:33 crc kubenswrapper[4727]: E0109 11:05:33.825762 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 09 11:05:33 crc kubenswrapper[4727]: E0109 11:05:33.826378 4727 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n667h7dh56ch8fh58dh8dh57h8chfh577h66bh9fh75hf5h555h644h75h58dhfch66h645hf7h689h579h55bh6fhdfh95h5b7h5d8hd8h56q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6746,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3179052d-0a48-4988-9696-814faeb20563): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:33 crc kubenswrapper[4727]: I0109 11:05:33.905245 4727 scope.go:117] "RemoveContainer" containerID="6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.048599 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.079933 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097754 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097836 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097916 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097964 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097998 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098307 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs" (OuterVolumeSpecName: "logs") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.099414 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data" (OuterVolumeSpecName: "config-data") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.102343 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs" (OuterVolumeSpecName: "logs") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.102350 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.103435 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data" (OuterVolumeSpecName: "config-data") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.105176 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts" (OuterVolumeSpecName: "scripts") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.105217 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts" (OuterVolumeSpecName: "scripts") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.110814 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.110872 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.111218 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b" (OuterVolumeSpecName: "kube-api-access-pwh9b") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "kube-api-access-pwh9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.123395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v" (OuterVolumeSpecName: "kube-api-access-2hs2v") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "kube-api-access-2hs2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.141982 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202554 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202705 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202749 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202878 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202903 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.203138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs" (OuterVolumeSpecName: "logs") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.203425 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.203556 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204057 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") on node 
\"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204073 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204086 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204096 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204105 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204117 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204126 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204136 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204145 4727 reconciler_common.go:293] "Volume detached for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204121 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts" (OuterVolumeSpecName: "scripts") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204156 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204165 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data" (OuterVolumeSpecName: "config-data") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.208429 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.208680 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp" (OuterVolumeSpecName: "kube-api-access-jsvmp") pod "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" (UID: "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1"). InnerVolumeSpecName "kube-api-access-jsvmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.211252 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg" (OuterVolumeSpecName: "kube-api-access-jtckg") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "kube-api-access-jtckg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.234705 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config" (OuterVolumeSpecName: "config") pod "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" (UID: "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.240850 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" (UID: "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.328957 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.328999 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329012 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329023 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329033 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329043 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329052 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.375759 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.451414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9bd79bb5-sgxjp" event={"ID":"718817e7-7114-4473-84e7-56349b861c3e","Type":"ContainerDied","Data":"4ab00658b972d762f35df32ce42e03171f3c7a20dae5a1fc6a4479d78d970b43"} Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.451533 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.457175 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.457171 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerDied","Data":"9fd2e2efda6f0fdf02a478cc42de4e68614bf7eee26261246b1c15c40d9abd07"} Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.457366 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd2e2efda6f0fdf02a478cc42de4e68614bf7eee26261246b1c15c40d9abd07" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.459768 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.459799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-95bf4c4d9-vwkb9" event={"ID":"1accd238-8dda-4882-b66b-96aefeb84df4","Type":"ContainerDied","Data":"931c8c326cbc00e09537bfff38f3cacf375f75e745d5be55085827239bd67b5e"} Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.466188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cf8ff49dc-bkwp8" event={"ID":"19039fe6-ce4a-4e84-b355-9ed185f05060","Type":"ContainerDied","Data":"a45c0fe9b2415ced716e83b8091dd784775539c8582b821d3ea575bffcd3c2b8"} Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.466394 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.479817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerStarted","Data":"64cc505548582ff0b92efe52617ea9736e870feb1d2d85557f334e68ae42a742"} Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.510090 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57c89666d8-8fhd6"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.529252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.538793 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.592485 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.607587 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:05:34 crc 
kubenswrapper[4727]: I0109 11:05:34.626110 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.638002 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.646319 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.654863 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.872372 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19039fe6-ce4a-4e84-b355-9ed185f05060" path="/var/lib/kubelet/pods/19039fe6-ce4a-4e84-b355-9ed185f05060/volumes" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.873056 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1accd238-8dda-4882-b66b-96aefeb84df4" path="/var/lib/kubelet/pods/1accd238-8dda-4882-b66b-96aefeb84df4/volumes" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.873738 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718817e7-7114-4473-84e7-56349b861c3e" path="/var/lib/kubelet/pods/718817e7-7114-4473-84e7-56349b861c3e/volumes" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.496465 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.497680 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerName="neutron-db-sync" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497698 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerName="neutron-db-sync" Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.497715 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="init" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497722 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="init" Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.497739 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497746 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497944 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497961 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerName="neutron-db-sync" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.499313 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.507379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554204 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.558853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.598667 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.600238 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.605084 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.605492 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f596n" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.605779 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.607453 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.636699 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.660959 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661329 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661590 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.662970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.663475 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.663824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.664322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.664341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") 
pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.706317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.763522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.763652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.764057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.764097 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " 
pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.764135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.768686 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.769058 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.771755 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.782344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.784555 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.829374 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.935953 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: W0109 11:05:35.965268 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0333d9ce_e537_4702_9180_533644b70869.slice/crio-12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967 WatchSource:0}: Error finding container 12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967: Status 404 returned error can't find the container with id 12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967 Jan 09 11:05:35 crc kubenswrapper[4727]: W0109 11:05:35.973667 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89031be7_ef50_45c8_b43f_b34f66012f21.slice/crio-8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a WatchSource:0}: Error finding container 8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a: Status 404 returned error can't find the container with id 8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.987883 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.988093 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zk2mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Live
nessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5c72l_openstack(5f7de868-87b0-49c7-ad5e-7c528f181550): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.989449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5c72l" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.012440 4727 scope.go:117] "RemoveContainer" containerID="23887e416fde2f38fe612379b7307c055f64d771c7bc20bcd11032e3c0ea705c" Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.736016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerStarted","Data":"f359bb60ecb5049a25ef11d10b22c031018c3de4d2dffb82f605df54479897f8"} Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.739770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerStarted","Data":"12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967"} Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.742243 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c89666d8-8fhd6" event={"ID":"89031be7-ef50-45c8-b43f-b34f66012f21","Type":"ContainerStarted","Data":"8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a"} Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.753154 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerStarted","Data":"2c768efbf36053423a59f381c55f6c5e4834d9d1dc1f2715dfcf51c67b4323c0"} Jan 09 11:05:36 crc kubenswrapper[4727]: E0109 11:05:36.755708 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5c72l" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.766626 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.018011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.785141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerStarted","Data":"8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.792786 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerStarted","Data":"d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.800815 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerStarted","Data":"8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.820691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerStarted","Data":"7b19e08e51c2187c9b787539a3d10f06721b0c9cd5e9e0ca48804bb7f658a9cf"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.825850 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pss24" podStartSLOduration=5.034709698 podStartE2EDuration="39.825791721s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="2026-01-09 11:05:01.581713445 +0000 UTC m=+1147.031618226" lastFinishedPulling="2026-01-09 11:05:36.372795468 +0000 UTC m=+1181.822700249" observedRunningTime="2026-01-09 11:05:37.810223481 +0000 UTC m=+1183.260128272" watchObservedRunningTime="2026-01-09 11:05:37.825791721 +0000 UTC m=+1183.275696502" Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.833209 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerStarted","Data":"4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.839595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerStarted","Data":"a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.849701 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c89666d8-8fhd6" 
event={"ID":"89031be7-ef50-45c8-b43f-b34f66012f21","Type":"ContainerStarted","Data":"0352e91e3b6f8f354549c2a614d9810f2ab2a775ae1cfdf255339394fd79299c"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.851521 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-56tkr" podStartSLOduration=7.621451488 podStartE2EDuration="39.851478544s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="2026-01-09 11:05:01.613997341 +0000 UTC m=+1147.063902122" lastFinishedPulling="2026-01-09 11:05:33.844024397 +0000 UTC m=+1179.293929178" observedRunningTime="2026-01-09 11:05:37.832973995 +0000 UTC m=+1183.282878776" watchObservedRunningTime="2026-01-09 11:05:37.851478544 +0000 UTC m=+1183.301383325" Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.856891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerStarted","Data":"84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.888024 4727 generic.go:334] "Generic (PLEG): container finished" podID="4862f781-5a00-439d-94b4-f717ce6324a2" containerID="fa78dd1b9838a1b44c24a9243a4a8cf4ce653daa745e1f7f47ee7a4b1b469835" exitCode=0 Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.888387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerDied","Data":"fa78dd1b9838a1b44c24a9243a4a8cf4ce653daa745e1f7f47ee7a4b1b469835"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.888499 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerStarted","Data":"63091b70999aa18980c69d6d71c9c1317a8afc30e821bca924a95d321d78761c"} Jan 09 11:05:37 crc 
kubenswrapper[4727]: I0109 11:05:37.912147 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nd4pq" podStartSLOduration=25.912123993 podStartE2EDuration="25.912123993s" podCreationTimestamp="2026-01-09 11:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:37.906958393 +0000 UTC m=+1183.356863174" watchObservedRunningTime="2026-01-09 11:05:37.912123993 +0000 UTC m=+1183.362028774" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.073481 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8db497957-k8d9r"] Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.075497 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.079664 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.080742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132240 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-combined-ca-bundle\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " 
pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-public-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-ovndb-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnmk\" (UniqueName: \"kubernetes.io/projected/434346b3-08dc-43a6-aed9-3c00672c0c35-kube-api-access-pfnmk\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-internal-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132522 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-httpd-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") 
" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.167260 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8db497957-k8d9r"] Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-public-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-ovndb-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnmk\" (UniqueName: \"kubernetes.io/projected/434346b3-08dc-43a6-aed9-3c00672c0c35-kube-api-access-pfnmk\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235219 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-internal-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-httpd-config\") pod 
\"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-combined-ca-bundle\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.240933 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-ovndb-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.253598 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-combined-ca-bundle\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.254770 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-internal-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 
crc kubenswrapper[4727]: I0109 11:05:38.255627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-public-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.258495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-httpd-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.267123 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.272323 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnmk\" (UniqueName: \"kubernetes.io/projected/434346b3-08dc-43a6-aed9-3c00672c0c35-kube-api-access-pfnmk\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.494279 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.929866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerStarted","Data":"7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265"} Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.954395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c89666d8-8fhd6" event={"ID":"89031be7-ef50-45c8-b43f-b34f66012f21","Type":"ContainerStarted","Data":"c96756fd46cc12528b047fad2396bce2cf5d57a6749b34484d3494f0b5561760"} Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.976840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerStarted","Data":"4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3"} Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.977352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.977416 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cbf5cf75b-vwxrh" podStartSLOduration=31.449320988 podStartE2EDuration="31.977400987s" podCreationTimestamp="2026-01-09 11:05:07 +0000 UTC" firstStartedPulling="2026-01-09 11:05:35.965657795 +0000 UTC m=+1181.415562576" lastFinishedPulling="2026-01-09 11:05:36.493737794 +0000 UTC m=+1181.943642575" observedRunningTime="2026-01-09 11:05:38.96234532 +0000 UTC m=+1184.412250131" watchObservedRunningTime="2026-01-09 11:05:38.977400987 +0000 UTC m=+1184.427305788" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.980125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab"} Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.002272 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57c89666d8-8fhd6" podStartSLOduration=31.202293306 podStartE2EDuration="32.002246807s" podCreationTimestamp="2026-01-09 11:05:07 +0000 UTC" firstStartedPulling="2026-01-09 11:05:35.994530524 +0000 UTC m=+1181.444435295" lastFinishedPulling="2026-01-09 11:05:36.794484015 +0000 UTC m=+1182.244388796" observedRunningTime="2026-01-09 11:05:38.995178926 +0000 UTC m=+1184.445083707" watchObservedRunningTime="2026-01-09 11:05:39.002246807 +0000 UTC m=+1184.452151608" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.030765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerStarted","Data":"69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd"} Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.030817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerStarted","Data":"be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717"} Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.030885 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.062479 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" podStartSLOduration=4.062458583 podStartE2EDuration="4.062458583s" podCreationTimestamp="2026-01-09 11:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-09 11:05:39.031042275 +0000 UTC m=+1184.480947076" watchObservedRunningTime="2026-01-09 11:05:39.062458583 +0000 UTC m=+1184.512363364" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.071270 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bdfc77c64-cjzlr" podStartSLOduration=4.0712464409999995 podStartE2EDuration="4.071246441s" podCreationTimestamp="2026-01-09 11:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:39.061800246 +0000 UTC m=+1184.511705057" watchObservedRunningTime="2026-01-09 11:05:39.071246441 +0000 UTC m=+1184.521151222" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.233091 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8db497957-k8d9r"] Jan 09 11:05:39 crc kubenswrapper[4727]: W0109 11:05:39.238994 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod434346b3_08dc_43a6_aed9_3c00672c0c35.slice/crio-fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac WatchSource:0}: Error finding container fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac: Status 404 returned error can't find the container with id fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.405374 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.405437 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.039157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerStarted","Data":"cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87"} Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.045128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerStarted","Data":"a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db"} Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.049012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8db497957-k8d9r" event={"ID":"434346b3-08dc-43a6-aed9-3c00672c0c35","Type":"ContainerStarted","Data":"fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac"} Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.083208 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.083175624 podStartE2EDuration="32.083175624s" podCreationTimestamp="2026-01-09 11:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:40.067035929 +0000 UTC m=+1185.516940720" watchObservedRunningTime="2026-01-09 11:05:40.083175624 +0000 UTC m=+1185.533080405" Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.104535 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.10447494 podStartE2EDuration="31.10447494s" 
podCreationTimestamp="2026-01-09 11:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:40.09893501 +0000 UTC m=+1185.548839801" watchObservedRunningTime="2026-01-09 11:05:40.10447494 +0000 UTC m=+1185.554379731" Jan 09 11:05:41 crc kubenswrapper[4727]: I0109 11:05:41.070521 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8db497957-k8d9r" event={"ID":"434346b3-08dc-43a6-aed9-3c00672c0c35","Type":"ContainerStarted","Data":"1276a8edafd070711298dd9ee6f8a38bb57278e90a13eca9c8ccbb2e3e5d6729"} Jan 09 11:05:42 crc kubenswrapper[4727]: I0109 11:05:42.099408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8db497957-k8d9r" event={"ID":"434346b3-08dc-43a6-aed9-3c00672c0c35","Type":"ContainerStarted","Data":"1d1aef94470a805f45904c85d0b95ad7dd7b81e684e648b2bb2867bb3d32604d"} Jan 09 11:05:42 crc kubenswrapper[4727]: I0109 11:05:42.100298 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:42 crc kubenswrapper[4727]: I0109 11:05:42.126046 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8db497957-k8d9r" podStartSLOduration=4.126024614 podStartE2EDuration="4.126024614s" podCreationTimestamp="2026-01-09 11:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:42.117781933 +0000 UTC m=+1187.567686734" watchObservedRunningTime="2026-01-09 11:05:42.126024614 +0000 UTC m=+1187.575929405" Jan 09 11:05:42 crc kubenswrapper[4727]: E0109 11:05:42.802454 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.113330 4727 generic.go:334] "Generic (PLEG): container finished" podID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerID="84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6" exitCode=0 Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.113535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerDied","Data":"84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6"} Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.117200 4727 generic.go:334] "Generic (PLEG): container finished" podID="790d27d6-9817-413b-b711-f0be91104704" containerID="8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff" exitCode=0 Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.117265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerDied","Data":"8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff"} Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.136533 4727 generic.go:334] "Generic (PLEG): container finished" podID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerID="8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617" exitCode=0 Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.136645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerDied","Data":"8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617"} Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.830916 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.912010 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.912436 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" containerID="cri-o://0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc" gracePeriod=10 Jan 09 11:05:46 crc kubenswrapper[4727]: I0109 11:05:46.159785 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerID="0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc" exitCode=0 Jan 09 11:05:46 crc kubenswrapper[4727]: I0109 11:05:46.159821 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerDied","Data":"0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc"} Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.715529 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.724190 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.769677 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.873967 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894603 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894718 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894738 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894803 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894832 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894990 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.895018 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 
crc kubenswrapper[4727]: I0109 11:05:47.895076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.895119 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.895145 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.902838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts" (OuterVolumeSpecName: "scripts") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.914311 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs" (OuterVolumeSpecName: "logs") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.915774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j" (OuterVolumeSpecName: "kube-api-access-6tq6j") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "kube-api-access-6tq6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.921846 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2" (OuterVolumeSpecName: "kube-api-access-n92p2") pod "a52e2c52-54f3-4f0d-9244-1ce7563deb78" (UID: "a52e2c52-54f3-4f0d-9244-1ce7563deb78"). InnerVolumeSpecName "kube-api-access-n92p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.924474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.934318 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a52e2c52-54f3-4f0d-9244-1ce7563deb78" (UID: "a52e2c52-54f3-4f0d-9244-1ce7563deb78"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.940665 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.941920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz" (OuterVolumeSpecName: "kube-api-access-bnppz") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "kube-api-access-bnppz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.942036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts" (OuterVolumeSpecName: "scripts") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.948630 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.964660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data" (OuterVolumeSpecName: "config-data") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.968768 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data" (OuterVolumeSpecName: "config-data") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.992025 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997167 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997230 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997647 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998222 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998245 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998265 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998277 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998288 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998299 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998310 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998321 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998331 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998341 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998352 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998363 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998374 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.000953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"a52e2c52-54f3-4f0d-9244-1ce7563deb78" (UID: "a52e2c52-54f3-4f0d-9244-1ce7563deb78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.008087 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc" (OuterVolumeSpecName: "kube-api-access-g95hc") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "kube-api-access-g95hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.010995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.011040 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.013341 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.049967 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.052877 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config" (OuterVolumeSpecName: "config") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.054830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.063820 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.073755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103396 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103458 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103477 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103492 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103533 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103549 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103562 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.160069 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.160672 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.161201 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57c89666d8-8fhd6" podUID="89031be7-ef50-45c8-b43f-b34f66012f21" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.187223 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerDied","Data":"22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.187271 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.187329 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.200083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerDied","Data":"2c768efbf36053423a59f381c55f6c5e4834d9d1dc1f2715dfcf51c67b4323c0"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.200141 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c768efbf36053423a59f381c55f6c5e4834d9d1dc1f2715dfcf51c67b4323c0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.200231 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.205853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.217127 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.217681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerDied","Data":"9f4d6e1e84339b6e76c479a4901b4c944d69b816e4b882b5ef6e50a8f5fbe884"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.217780 4727 scope.go:117] "RemoveContainer" containerID="0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.229120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerDied","Data":"feb2b5d615adb3db7bf2469345647c3857babf723321591e5d776e3acdeded1e"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.229202 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb2b5d615adb3db7bf2469345647c3857babf723321591e5d776e3acdeded1e" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.229348 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.259722 4727 scope.go:117] "RemoveContainer" containerID="0517845b382f4761d9f5fcd66722857b845de8c6eb388211fc09443dd7611f06" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.284109 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.297097 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.683842 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.683912 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.750089 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.751278 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.873019 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" path="/var/lib/kubelet/pods/bf11a72b-70ce-401b-aed0-21ce9c1fcf71/volumes" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.902978 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-666857844b-c2hp6"] Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903438 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903457 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903475 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="init" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903483 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="init" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903498 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790d27d6-9817-413b-b711-f0be91104704" containerName="placement-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903528 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="790d27d6-9817-413b-b711-f0be91104704" containerName="placement-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903543 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerName="keystone-bootstrap" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903549 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerName="keystone-bootstrap" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903560 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerName="barbican-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903569 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerName="barbican-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903783 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerName="barbican-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903830 4727 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerName="keystone-bootstrap" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903850 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="790d27d6-9817-413b-b711-f0be91104704" containerName="placement-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903860 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.904599 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.909547 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.909809 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.909958 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.910270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.910419 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.912151 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.998938 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-666857844b-c2hp6"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.021473 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-85c4f6b76d-7zrx8"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.023358 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-scripts\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024453 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtfz\" (UniqueName: \"kubernetes.io/projected/3738e7aa-d182-43a0-962c-b735526851f2-kube-api-access-2mtfz\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-combined-ca-bundle\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024644 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-public-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-fernet-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024697 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-credential-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024724 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-config-data\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024748 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-internal-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.040418 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.040456 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.040742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hx5p2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 
11:05:49.040813 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.043906 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.069553 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c4f6b76d-7zrx8"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-combined-ca-bundle\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgpl6\" (UniqueName: \"kubernetes.io/projected/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-kube-api-access-wgpl6\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-scripts\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-public-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: 
\"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-config-data\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-public-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-logs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-fernet-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128518 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-credential-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " 
pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138441 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76fd5dd86c-tmlx2"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-config-data\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-internal-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138733 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-internal-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-scripts\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138891 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mtfz\" (UniqueName: 
\"kubernetes.io/projected/3738e7aa-d182-43a0-962c-b735526851f2-kube-api-access-2mtfz\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138934 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-combined-ca-bundle\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.151881 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.153759 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-fernet-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.159263 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-scripts\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.175585 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zbdpv" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.175665 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-internal-tls-certs\") pod 
\"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.175869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-public-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.176334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-credential-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.176748 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.176894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-config-data\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.177385 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.184180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-combined-ca-bundle\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: 
I0109 11:05:49.221985 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d89df6ff4-gzcbx"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.223685 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.227999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mtfz\" (UniqueName: \"kubernetes.io/projected/3738e7aa-d182-43a0-962c-b735526851f2-kube-api-access-2mtfz\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.228634 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.229384 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.250828 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-internal-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tw4\" (UniqueName: \"kubernetes.io/projected/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-kube-api-access-66tw4\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251343 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-combined-ca-bundle\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251454 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251527 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgpl6\" (UniqueName: \"kubernetes.io/projected/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-kube-api-access-wgpl6\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-logs\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-scripts\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-public-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-combined-ca-bundle\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.252647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data-custom\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.252733 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-config-data\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.252759 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-logs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.253421 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-logs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.261827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76fd5dd86c-tmlx2"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.262594 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-internal-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.264154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-public-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.286976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-scripts\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.304503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-combined-ca-bundle\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.312171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-config-data\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.314903 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgpl6\" (UniqueName: \"kubernetes.io/projected/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-kube-api-access-wgpl6\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.314977 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d89df6ff4-gzcbx"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-combined-ca-bundle\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-logs\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-combined-ca-bundle\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " 
pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r442g\" (UniqueName: \"kubernetes.io/projected/b166264d-8575-47af-88f1-c569c71c84f1-kube-api-access-r442g\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354492 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data-custom\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354542 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b166264d-8575-47af-88f1-c569c71c84f1-logs\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354602 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data-custom\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tw4\" (UniqueName: 
\"kubernetes.io/projected/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-kube-api-access-66tw4\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354762 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.358079 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-logs\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.358874 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.363627 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.365294 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.370226 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data-custom\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.375284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-combined-ca-bundle\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.385603 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.400644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.443352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.445823 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.447681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tw4\" (UniqueName: 
\"kubernetes.io/projected/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-kube-api-access-66tw4\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.473136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.473825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-combined-ca-bundle\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.474972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r442g\" (UniqueName: \"kubernetes.io/projected/b166264d-8575-47af-88f1-c569c71c84f1-kube-api-access-r442g\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.489072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b166264d-8575-47af-88f1-c569c71c84f1-logs\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.494489 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-combined-ca-bundle\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.495975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b166264d-8575-47af-88f1-c569c71c84f1-logs\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.489308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data-custom\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.531310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data-custom\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.544183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 
11:05:49.571063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r442g\" (UniqueName: \"kubernetes.io/projected/b166264d-8575-47af-88f1-c569c71c84f1-kube-api-access-r442g\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605148 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605174 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605196 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc 
kubenswrapper[4727]: I0109 11:05:49.605274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.628865 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.628938 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.640727 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.644745 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.664369 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.664383 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.707761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" 
Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.727710 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.756158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.765704 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.768212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.769384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.783121 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: 
\"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.786881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.798121 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.841908 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.855599 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866898 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " 
pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866924 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866971 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.867000 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.902963 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970805 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.971072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.972021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.974550 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.988212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.989244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.996596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.032308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.169497 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-666857844b-c2hp6"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.270656 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.428751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-666857844b-c2hp6" event={"ID":"3738e7aa-d182-43a0-962c-b735526851f2","Type":"ContainerStarted","Data":"257f29c478f2f77d8ab87459adef8e54d8f9120fb2557bda6c875b56f4b692c0"} Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.430425 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.430447 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.516499 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c4f6b76d-7zrx8"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.694181 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76fd5dd86c-tmlx2"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.732088 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d89df6ff4-gzcbx"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.969422 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.101962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.460881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-666857844b-c2hp6" event={"ID":"3738e7aa-d182-43a0-962c-b735526851f2","Type":"ContainerStarted","Data":"e9b5249158ce3c47b8a9559ecd7b24c7cd40e97071bbb03a35b034f4a8af741b"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.461627 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.476326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c4f6b76d-7zrx8" event={"ID":"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6","Type":"ContainerStarted","Data":"51e102ab73001250171aeac8da56c6cd9138cc4141d819cd5dcfd8ccd9ccc759"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.476387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c4f6b76d-7zrx8" event={"ID":"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6","Type":"ContainerStarted","Data":"f4462cb2c6255c4b2ca225a032cf1f4564d40101cd3c020b2b0cb2a26b3e0ac3"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.476401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c4f6b76d-7zrx8" event={"ID":"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6","Type":"ContainerStarted","Data":"38f806d77cf5116373985d1661bba0c86d97111858ff7ddbbd1805becd8aa786"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.477391 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.477425 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.485270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" event={"ID":"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8","Type":"ContainerStarted","Data":"8c38992963841723c9d58d1535b601e9e02c8e7703f2e2b913e36a9b1392ce64"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.496664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerStarted","Data":"5bc7bb7ac89ce392430ac7e65ff0eb04ba2048df225717424e45329a79f0c64a"} Jan 09 
11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.506370 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-666857844b-c2hp6" podStartSLOduration=3.506338579 podStartE2EDuration="3.506338579s" podCreationTimestamp="2026-01-09 11:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:51.489251968 +0000 UTC m=+1196.939156769" watchObservedRunningTime="2026-01-09 11:05:51.506338579 +0000 UTC m=+1196.956243360" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.518807 4727 generic.go:334] "Generic (PLEG): container finished" podID="c987342c-3221-479b-9298-cdf7c85e22cd" containerID="976be790afea6d4b89ec035b128ead320d45ad49b962862d4715341f9c9e16da" exitCode=0 Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.518908 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerDied","Data":"976be790afea6d4b89ec035b128ead320d45ad49b962862d4715341f9c9e16da"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.518945 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerStarted","Data":"2fc1bd7230fec540cd4a334d07ebbdb4b06f434463e354143dc267a731f76be2"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.532653 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85c4f6b76d-7zrx8" podStartSLOduration=3.532625069 podStartE2EDuration="3.532625069s" podCreationTimestamp="2026-01-09 11:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:51.513077851 +0000 UTC m=+1196.962982642" watchObservedRunningTime="2026-01-09 11:05:51.532625069 +0000 UTC 
m=+1196.982529850" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.556816 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.556835 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.564184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" event={"ID":"b166264d-8575-47af-88f1-c569c71c84f1","Type":"ContainerStarted","Data":"322485e94d04b07a0628480ea332d510b19bc2e880861e5537ca397a02f6be32"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.392870 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5456d7bfcd-5bs8c"] Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.395885 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.404131 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.404422 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.413516 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5456d7bfcd-5bs8c"] Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470358 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-internal-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef4869f-d107-4f5b-a136-166de8ac7a69-logs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvf98\" (UniqueName: \"kubernetes.io/projected/fef4869f-d107-4f5b-a136-166de8ac7a69-kube-api-access-mvf98\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-public-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470568 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-combined-ca-bundle\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc 
kubenswrapper[4727]: I0109 11:05:52.470623 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data-custom\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573710 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef4869f-d107-4f5b-a136-166de8ac7a69-logs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvf98\" (UniqueName: \"kubernetes.io/projected/fef4869f-d107-4f5b-a136-166de8ac7a69-kube-api-access-mvf98\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-public-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-combined-ca-bundle\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: 
I0109 11:05:52.573947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data-custom\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573988 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.574046 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-internal-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.577686 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef4869f-d107-4f5b-a136-166de8ac7a69-logs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.587987 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-combined-ca-bundle\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.588423 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.590028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerStarted","Data":"3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.590092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerStarted","Data":"087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.591318 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.591807 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.597322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data-custom\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.598052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-public-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " 
pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.601728 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-internal-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.615572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerStarted","Data":"7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.617332 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.617371 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.619243 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvf98\" (UniqueName: \"kubernetes.io/projected/fef4869f-d107-4f5b-a136-166de8ac7a69-kube-api-access-mvf98\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.642540 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podStartSLOduration=3.642493557 podStartE2EDuration="3.642493557s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:52.628225312 +0000 UTC m=+1198.078130093" watchObservedRunningTime="2026-01-09 11:05:52.642493557 +0000 UTC m=+1198.092398358" 
Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.662643 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" podStartSLOduration=3.662623611 podStartE2EDuration="3.662623611s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:52.662050185 +0000 UTC m=+1198.111954976" watchObservedRunningTime="2026-01-09 11:05:52.662623611 +0000 UTC m=+1198.112528392" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.722264 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.722484 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.724743 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.754109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:53 crc kubenswrapper[4727]: E0109 11:05:53.066329 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.494863 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.495684 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.631947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerStarted","Data":"3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4"} Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.632676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.664427 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5c72l" podStartSLOduration=5.278280011 podStartE2EDuration="55.66439648s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="2026-01-09 11:05:01.639200844 +0000 UTC m=+1147.089105635" lastFinishedPulling="2026-01-09 11:05:52.025317323 +0000 UTC m=+1197.475222104" observedRunningTime="2026-01-09 11:05:53.649617831 +0000 UTC m=+1199.099522622" watchObservedRunningTime="2026-01-09 11:05:53.66439648 +0000 UTC m=+1199.114301261" Jan 09 11:05:54 crc 
kubenswrapper[4727]: I0109 11:05:54.886163 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5456d7bfcd-5bs8c"] Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.676961 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5456d7bfcd-5bs8c" event={"ID":"fef4869f-d107-4f5b-a136-166de8ac7a69","Type":"ContainerStarted","Data":"cd27e0c259a7636ad03573906691b98d7c85ce2fc932733052406dc3d928b297"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5456d7bfcd-5bs8c" event={"ID":"fef4869f-d107-4f5b-a136-166de8ac7a69","Type":"ContainerStarted","Data":"734fa0136e05f8beda10d4f9902f1c2c1bb5e6f7f274719e7505e85690430187"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5456d7bfcd-5bs8c" event={"ID":"fef4869f-d107-4f5b-a136-166de8ac7a69","Type":"ContainerStarted","Data":"afaf8224bc057f25ff4fcfa18b1facecd43324f8ea9d02f371f078902fe74684"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677038 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677049 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.679068 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" event={"ID":"b166264d-8575-47af-88f1-c569c71c84f1","Type":"ContainerStarted","Data":"dce98ec6c97926cab4955d81108e3efa253aa7aac5a89692c0a5f350ce898868"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.679117 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" 
event={"ID":"b166264d-8575-47af-88f1-c569c71c84f1","Type":"ContainerStarted","Data":"b0ce95c84516095056682d41fc1627e7bc2a93ae506e1ccc59847e696dee4555"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.698415 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" event={"ID":"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8","Type":"ContainerStarted","Data":"d07943b7feb486e00606a9c38566812b04b5b34da0b212acbacd4649165f14a7"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.698465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" event={"ID":"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8","Type":"ContainerStarted","Data":"aec1b8c4d148dfd7407c7dce49b35ddf868dfc11e52729dcfd894bb065394bd4"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.715028 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5456d7bfcd-5bs8c" podStartSLOduration=3.71500706 podStartE2EDuration="3.71500706s" podCreationTimestamp="2026-01-09 11:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:55.70794865 +0000 UTC m=+1201.157853461" watchObservedRunningTime="2026-01-09 11:05:55.71500706 +0000 UTC m=+1201.164911831" Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.741979 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" podStartSLOduration=3.174287705 podStartE2EDuration="6.741953528s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="2026-01-09 11:05:50.725652519 +0000 UTC m=+1196.175557300" lastFinishedPulling="2026-01-09 11:05:54.293318342 +0000 UTC m=+1199.743223123" observedRunningTime="2026-01-09 11:05:55.723065978 +0000 UTC m=+1201.172970779" watchObservedRunningTime="2026-01-09 11:05:55.741953528 +0000 UTC m=+1201.191858309" Jan 09 
11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.777016 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" podStartSLOduration=3.226642518 podStartE2EDuration="6.776990444s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="2026-01-09 11:05:50.743621064 +0000 UTC m=+1196.193525845" lastFinishedPulling="2026-01-09 11:05:54.29396899 +0000 UTC m=+1199.743873771" observedRunningTime="2026-01-09 11:05:55.74425078 +0000 UTC m=+1201.194155561" watchObservedRunningTime="2026-01-09 11:05:55.776990444 +0000 UTC m=+1201.226895225" Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.011500 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.160144 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57c89666d8-8fhd6" podUID="89031be7-ef50-45c8-b43f-b34f66012f21" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.731243 4727 generic.go:334] "Generic (PLEG): container finished" podID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerID="3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4" exitCode=0 Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.731336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerDied","Data":"3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4"} Jan 09 11:05:59 crc 
kubenswrapper[4727]: I0109 11:05:59.977952 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.052887 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.053125 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" containerID="cri-o://4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3" gracePeriod=10 Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.788623 4727 generic.go:334] "Generic (PLEG): container finished" podID="4862f781-5a00-439d-94b4-f717ce6324a2" containerID="4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3" exitCode=0 Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.788690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerDied","Data":"4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3"} Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.830274 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.746003 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.822173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerDied","Data":"a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb"} Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.822229 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.822242 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924404 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924436 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924561 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.925990 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.937326 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg" (OuterVolumeSpecName: "kube-api-access-zk2mg") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). 
InnerVolumeSpecName "kube-api-access-zk2mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.940824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.941080 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts" (OuterVolumeSpecName: "scripts") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.960742 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.004066 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data" (OuterVolumeSpecName: "config-data") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028706 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028744 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028763 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028779 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028791 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.035908 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:06:02 crc kubenswrapper[4727]: E0109 11:06:02.095271 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129281 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129554 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129573 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129612 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.139660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45" (OuterVolumeSpecName: "kube-api-access-fkh45") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "kube-api-access-fkh45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.187076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.192019 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config" (OuterVolumeSpecName: "config") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.197127 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.206696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.219924 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232089 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232152 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232166 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232176 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232208 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232218 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.449050 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.835779 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.835751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerDied","Data":"63091b70999aa18980c69d6d71c9c1317a8afc30e821bca924a95d321d78761c"} Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.835982 4727 scope.go:117] "RemoveContainer" containerID="4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.840649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688"} Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.841686 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" containerID="cri-o://e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab" gracePeriod=30 Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.842621 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.842614 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" containerID="cri-o://bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451" gracePeriod=30 Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.842728 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" 
containerID="cri-o://95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688" gracePeriod=30 Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.900631 4727 scope.go:117] "RemoveContainer" containerID="fa78dd1b9838a1b44c24a9243a4a8cf4ce653daa745e1f7f47ee7a4b1b469835" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.906650 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.950743 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.309340 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.310094 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerName="cinder-db-sync" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310188 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerName="cinder-db-sync" Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.310267 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="init" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310323 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="init" Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.310392 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310447 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310756 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310846 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerName="cinder-db-sync" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.312129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319198 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319423 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fql5g" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319558 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319746 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.345960 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.525791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526143 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd69v\" (UniqueName: 
\"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.539205 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3179052d_0a48_4988_9696_814faeb20563.slice/crio-conmon-bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.549591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.583421 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.594655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.604060 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.606427 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.612082 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.616924 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660953 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660983 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661019 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661040 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661061 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661109 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661196 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661216 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661353 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.691072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 
09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.692992 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.703038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.703791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.704799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767656 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767729 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767789 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: 
\"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767810 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767827 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767896 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: 
\"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767956 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.768374 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.778692 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.778785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.786338 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.787336 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.790649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.809709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.875858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.875962 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876015 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: 
\"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876039 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876107 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.877737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.878050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " 
pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.878554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.878702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.880584 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.905664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.907942 4727 generic.go:334] "Generic (PLEG): container finished" podID="3179052d-0a48-4988-9696-814faeb20563" containerID="95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688" exitCode=0 Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.907990 4727 generic.go:334] "Generic (PLEG): container finished" podID="3179052d-0a48-4988-9696-814faeb20563" 
containerID="bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451" exitCode=2 Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.908020 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688"} Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.908058 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451"} Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.929166 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.958334 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.999926 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.025269 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.657067 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.726570 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.955212 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" path="/var/lib/kubelet/pods/4862f781-5a00-439d-94b4-f717ce6324a2/volumes" Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.956642 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.991441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerStarted","Data":"46e0819a2a4dd76f55beafd0dd463399c99fccea0ca8d438850be56e9391306d"} Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.994700 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerStarted","Data":"ce83cb6536bed5f69863a9bc02f546d105aa9cecf6f79078fbb71dfb9bf0d4f6"} Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.999744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerStarted","Data":"1fc9e9988fd4856268dac8faebd8ec23ba321d236e5bf07d0594fdfe44867d1e"} Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.246825 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.344585 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.411808 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.492199 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.492421 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" containerID="cri-o://087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68" gracePeriod=30 Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.492935 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" containerID="cri-o://3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc" gracePeriod=30 Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.501596 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.958818 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.038287 4727 generic.go:334] "Generic (PLEG): container finished" podID="b50668e7-e061-453a-bfcb-09cd1392aa57" 
containerID="40bb9476bfc07b9354c89f5cbef3057e68cde163c53908f4d6837e2be7ee3f19" exitCode=0 Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.038413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerDied","Data":"40bb9476bfc07b9354c89f5cbef3057e68cde163c53908f4d6837e2be7ee3f19"} Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.091925 4727 generic.go:334] "Generic (PLEG): container finished" podID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerID="087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68" exitCode=143 Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.091988 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerDied","Data":"087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68"} Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.095420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerStarted","Data":"89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.131335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerStarted","Data":"4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.131736 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" containerID="cri-o://89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0" gracePeriod=30 Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.131966 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" containerID="cri-o://4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c" gracePeriod=30 Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.132098 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.140727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerStarted","Data":"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.144492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerStarted","Data":"8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.144898 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.162174 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.162151683 podStartE2EDuration="4.162151683s" podCreationTimestamp="2026-01-09 11:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:07.157796895 +0000 UTC m=+1212.607701676" watchObservedRunningTime="2026-01-09 11:06:07.162151683 +0000 UTC m=+1212.612056464" Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.192170 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" podStartSLOduration=4.192148523 
podStartE2EDuration="4.192148523s" podCreationTimestamp="2026-01-09 11:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:07.185915504 +0000 UTC m=+1212.635820305" watchObservedRunningTime="2026-01-09 11:06:07.192148523 +0000 UTC m=+1212.642053304" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.219326 4727 generic.go:334] "Generic (PLEG): container finished" podID="3d0f92bc-9d54-4382-b822-064c339799c4" containerID="89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0" exitCode=143 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.219402 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerDied","Data":"89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0"} Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.278983 4727 generic.go:334] "Generic (PLEG): container finished" podID="3179052d-0a48-4988-9696-814faeb20563" containerID="e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab" exitCode=0 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.279354 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab"} Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.298618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerStarted","Data":"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c"} Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.341133 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.256574424 
podStartE2EDuration="5.341108917s" podCreationTimestamp="2026-01-09 11:06:03 +0000 UTC" firstStartedPulling="2026-01-09 11:06:04.895760257 +0000 UTC m=+1210.345665038" lastFinishedPulling="2026-01-09 11:06:05.98029475 +0000 UTC m=+1211.430199531" observedRunningTime="2026-01-09 11:06:08.340544091 +0000 UTC m=+1213.790448882" watchObservedRunningTime="2026-01-09 11:06:08.341108917 +0000 UTC m=+1213.791013698" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.437976 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.515299 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522318 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522843 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.523028 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.523160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.523897 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.524022 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.530992 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts" (OuterVolumeSpecName: "scripts") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.534213 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746" (OuterVolumeSpecName: "kube-api-access-p6746") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "kube-api-access-p6746". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.586063 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.607427 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.607798 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bdfc77c64-cjzlr" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-api" containerID="cri-o://be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717" gracePeriod=30 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.608615 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bdfc77c64-cjzlr" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" containerID="cri-o://69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd" gracePeriod=30 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627925 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627956 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627968 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627976 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 
11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627987 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.668707 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.672823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data" (OuterVolumeSpecName: "config-data") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.731829 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.731870 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.004180 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.313534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb"} Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.313632 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.315523 4727 scope.go:117] "RemoveContainer" containerID="95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.317447 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerDied","Data":"69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd"} Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.317440 4727 generic.go:334] "Generic (PLEG): container finished" podID="29996e65-8eab-4604-a8ca-cac1063478fd" containerID="69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd" exitCode=0 Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.338574 4727 scope.go:117] "RemoveContainer" containerID="bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.365866 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.375065 4727 scope.go:117] "RemoveContainer" containerID="e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.404972 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.408579 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.408650 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.408725 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.409774 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.409847 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c" gracePeriod=600 Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429018 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: E0109 11:06:09.429710 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429732 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" Jan 09 11:06:09 crc kubenswrapper[4727]: E0109 11:06:09.429767 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" Jan 09 
11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429775 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" Jan 09 11:06:09 crc kubenswrapper[4727]: E0109 11:06:09.429792 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429799 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.430020 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.430031 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.430058 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.432173 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.436032 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.437460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.437953 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558656 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558815 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.559228 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661584 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661674 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.662696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 
crc kubenswrapper[4727]: I0109 11:06:09.662742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.668489 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.670951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.671470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.671536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.687221 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"ceilometer-0\" (UID: 
\"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.764043 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.948695 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:35426->10.217.0.163:9311: read: connection reset by peer" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.322742 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375665 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c" exitCode=0 Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c"} Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a"} Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375903 4727 scope.go:117] "RemoveContainer" containerID="d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.390479 4727 
generic.go:334] "Generic (PLEG): container finished" podID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerID="3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc" exitCode=0 Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.390787 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerDied","Data":"3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc"} Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.429732 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.512676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.647080 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.720959 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721047 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod 
\"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721242 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721312 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.722729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs" (OuterVolumeSpecName: "logs") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.732287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.734771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb" (OuterVolumeSpecName: "kube-api-access-wwhzb") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "kube-api-access-wwhzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.768784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.786232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data" (OuterVolumeSpecName: "config-data") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823889 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823936 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823948 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823958 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823968 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.871316 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3179052d-0a48-4988-9696-814faeb20563" path="/var/lib/kubelet/pods/3179052d-0a48-4988-9696-814faeb20563/volumes" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.409288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"} Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.409684 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"c35551f5fd2325dd8ded3e2242e43e59a4eeb9e347df7aa845f106c0ffc6e15c"} Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.411533 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.411547 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerDied","Data":"5bc7bb7ac89ce392430ac7e65ff0eb04ba2048df225717424e45329a79f0c64a"} Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.411618 4727 scope.go:117] "RemoveContainer" containerID="3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.441574 4727 scope.go:117] "RemoveContainer" containerID="087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.442930 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.466338 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.163720 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.389389 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.429691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"} Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.477816 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.478114 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" containerID="cri-o://d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3" gracePeriod=30 Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.478682 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" containerID="cri-o://7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265" gracePeriod=30 Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.872048 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" path="/var/lib/kubelet/pods/7283b7d5-d972-4c78-ac33-72488eedabf2/volumes" Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.445232 4727 generic.go:334] "Generic (PLEG): container finished" podID="29996e65-8eab-4604-a8ca-cac1063478fd" containerID="be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717" exitCode=0 Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.445314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerDied","Data":"be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717"} Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.448033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"} Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.867709 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.932068 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.999867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:13.999971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.000076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.000183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.000230 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.009405 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.009674 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" containerID="cri-o://7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8" gracePeriod=10 Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.009078 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.021592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l" (OuterVolumeSpecName: "kube-api-access-mz54l") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "kube-api-access-mz54l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.085483 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.094385 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config" (OuterVolumeSpecName: "config") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102847 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102881 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102893 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102903 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 
crc kubenswrapper[4727]: I0109 11:06:14.160176 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.205489 4727 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.039267 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.046189 4727 generic.go:334] "Generic (PLEG): container finished" podID="c987342c-3221-479b-9298-cdf7c85e22cd" containerID="7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8" exitCode=0 Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.046271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerDied","Data":"7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8"} Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.049947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerDied","Data":"7b19e08e51c2187c9b787539a3d10f06721b0c9cd5e9e0ca48804bb7f658a9cf"} Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.050032 4727 scope.go:117] "RemoveContainer" containerID="69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.050378 4727 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.113076 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.121849 4727 scope.go:117] "RemoveContainer" containerID="be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.129149 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.155146 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.272755 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.273332 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.303732 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348408 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348438 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348563 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348659 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.373669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5" (OuterVolumeSpecName: "kube-api-access-ttbs5") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "kube-api-access-ttbs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.418204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config" (OuterVolumeSpecName: "config") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.429230 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.429792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.434044 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.437109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457033 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457067 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457078 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457089 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") on 
node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457099 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457107 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.063307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"} Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.063761 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.065855 4727 generic.go:334] "Generic (PLEG): container finished" podID="bddc5542-122d-4606-a57a-8830398a4c93" containerID="7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265" exitCode=0 Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.065963 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerDied","Data":"7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265"} Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.068344 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.068388 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerDied","Data":"2fc1bd7230fec540cd4a334d07ebbdb4b06f434463e354143dc267a731f76be2"} Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.068473 4727 scope.go:117] "RemoveContainer" containerID="7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.070652 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" containerID="cri-o://bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" gracePeriod=30 Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.071063 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" containerID="cri-o://f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" gracePeriod=30 Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.101298 4727 scope.go:117] "RemoveContainer" containerID="976be790afea6d4b89ec035b128ead320d45ad49b962862d4715341f9c9e16da" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.110177 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.509784367 podStartE2EDuration="7.110155764s" podCreationTimestamp="2026-01-09 11:06:09 +0000 UTC" firstStartedPulling="2026-01-09 11:06:10.463653938 +0000 UTC m=+1215.913558719" lastFinishedPulling="2026-01-09 11:06:14.064025335 +0000 UTC m=+1219.513930116" observedRunningTime="2026-01-09 11:06:16.09260066 +0000 UTC m=+1221.542505461" watchObservedRunningTime="2026-01-09 
11:06:16.110155764 +0000 UTC m=+1221.560060545" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.128190 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.130914 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.295614 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.873734 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" path="/var/lib/kubelet/pods/29996e65-8eab-4604-a8ca-cac1063478fd/volumes" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.874609 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" path="/var/lib/kubelet/pods/c987342c-3221-479b-9298-cdf7c85e22cd/volumes" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.012115 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.104566 4727 generic.go:334] "Generic (PLEG): container finished" podID="c5f4cf4a-501a-4881-b395-2740657333d5" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" exitCode=0 Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.104639 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerDied","Data":"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c"} Jan 09 
11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.834633 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.953734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.953901 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.953982 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954055 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954217 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.960767 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v" (OuterVolumeSpecName: "kube-api-access-gd69v") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "kube-api-access-gd69v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.961771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.967498 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts" (OuterVolumeSpecName: "scripts") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.027235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056460 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056529 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056543 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056555 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056568 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.076150 4727 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data" (OuterVolumeSpecName: "config-data") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119198 4727 generic.go:334] "Generic (PLEG): container finished" podID="c5f4cf4a-501a-4881-b395-2740657333d5" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" exitCode=0 Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119295 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerDied","Data":"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e"} Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119343 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerDied","Data":"ce83cb6536bed5f69863a9bc02f546d105aa9cecf6f79078fbb71dfb9bf0d4f6"} Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119364 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119383 4727 scope.go:117] "RemoveContainer" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.156973 4727 scope.go:117] "RemoveContainer" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.166206 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.209551 4727 scope.go:117] "RemoveContainer" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.210307 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c\": container with ID starting with f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c not found: ID does not exist" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.210352 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c"} err="failed to get container status \"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c\": rpc error: code = NotFound desc = could not find container \"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c\": container with ID starting with f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c not found: ID does not exist" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.210403 4727 scope.go:117] 
"RemoveContainer" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.211317 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e\": container with ID starting with bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e not found: ID does not exist" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.211457 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e"} err="failed to get container status \"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e\": rpc error: code = NotFound desc = could not find container \"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e\": container with ID starting with bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e not found: ID does not exist" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.212585 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.229952 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.242154 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.242910 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.242943 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" 
containerName="neutron-api" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.242957 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.242968 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.242993 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243002 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243022 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="init" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243031 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="init" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243050 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243057 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243073 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243085 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" Jan 09 11:06:19 crc 
kubenswrapper[4727]: E0109 11:06:19.243099 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243105 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243124 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243130 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243362 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243380 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243388 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243396 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243405 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243416 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" Jan 09 11:06:19 crc 
kubenswrapper[4727]: I0109 11:06:19.243428 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.244833 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.247907 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.252326 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.369804 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfcd\" (UniqueName: \"kubernetes.io/projected/e69c5def-7abe-4486-b548-323e0416cc83-kube-api-access-6qfcd\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370283 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e69c5def-7abe-4486-b548-323e0416cc83-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370408 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-scripts\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.472931 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e69c5def-7abe-4486-b548-323e0416cc83-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473061 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473089 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-scripts\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfcd\" (UniqueName: \"kubernetes.io/projected/e69c5def-7abe-4486-b548-323e0416cc83-kube-api-access-6qfcd\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.474666 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e69c5def-7abe-4486-b548-323e0416cc83-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.483367 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " 
pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.483629 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.494457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.497960 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-scripts\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.498499 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfcd\" (UniqueName: \"kubernetes.io/projected/e69c5def-7abe-4486-b548-323e0416cc83-kube-api-access-6qfcd\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.564105 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.997076 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: i/o timeout" Jan 09 11:06:20 crc kubenswrapper[4727]: I0109 11:06:20.134342 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:20 crc kubenswrapper[4727]: W0109 11:06:20.172063 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode69c5def_7abe_4486_b548_323e0416cc83.slice/crio-81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297 WatchSource:0}: Error finding container 81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297: Status 404 returned error can't find the container with id 81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297 Jan 09 11:06:20 crc kubenswrapper[4727]: I0109 11:06:20.875371 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" path="/var/lib/kubelet/pods/c5f4cf4a-501a-4881-b395-2740657333d5/volumes" Jan 09 11:06:20 crc kubenswrapper[4727]: I0109 11:06:20.922618 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:06:21 crc kubenswrapper[4727]: I0109 11:06:21.043467 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:06:21 crc kubenswrapper[4727]: I0109 11:06:21.174490 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e69c5def-7abe-4486-b548-323e0416cc83","Type":"ContainerStarted","Data":"273137fd08b7f1df78b4a23bef04f558ece73ad6f5655c66e7f859b7ee230afb"} Jan 09 11:06:21 crc 
kubenswrapper[4727]: I0109 11:06:21.174544 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e69c5def-7abe-4486-b548-323e0416cc83","Type":"ContainerStarted","Data":"81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297"} Jan 09 11:06:21 crc kubenswrapper[4727]: I0109 11:06:21.555104 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.184951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e69c5def-7abe-4486-b548-323e0416cc83","Type":"ContainerStarted","Data":"e35bc1604915387393e7d7f12e6fe1533c0eeb1d5e802af5e018550ce8db9c88"} Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.217254 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.217228555 podStartE2EDuration="3.217228555s" podCreationTimestamp="2026-01-09 11:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:22.20109193 +0000 UTC m=+1227.650996721" watchObservedRunningTime="2026-01-09 11:06:22.217228555 +0000 UTC m=+1227.667133346" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.235643 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.237209 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.239552 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.239564 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wdrq9" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.241795 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.249929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339959 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444160 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444223 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.446854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.453425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.454573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.464030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.555352 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.625547 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.658411 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.696577 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.698299 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.706745 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.758088 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.758544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz852\" (UniqueName: \"kubernetes.io/projected/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-kube-api-access-kz852\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.759988 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " 
pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.760163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: E0109 11:06:22.770597 4727 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 09 11:06:22 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9144eabf-83b9-49a6-a047-b2606a68d1a7_0(bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180" Netns:"/var/run/netns/cc6d7f27-05af-4c25-a0f4-4bd76583f251" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180;K8S_POD_UID=9144eabf-83b9-49a6-a047-b2606a68d1a7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9144eabf-83b9-49a6-a047-b2606a68d1a7]: expected pod UID "9144eabf-83b9-49a6-a047-b2606a68d1a7" but got "06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" from Kube API Jan 09 11:06:22 crc kubenswrapper[4727]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 09 11:06:22 crc kubenswrapper[4727]: > Jan 09 11:06:22 crc kubenswrapper[4727]: E0109 11:06:22.770683 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 09 11:06:22 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9144eabf-83b9-49a6-a047-b2606a68d1a7_0(bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180" Netns:"/var/run/netns/cc6d7f27-05af-4c25-a0f4-4bd76583f251" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180;K8S_POD_UID=9144eabf-83b9-49a6-a047-b2606a68d1a7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9144eabf-83b9-49a6-a047-b2606a68d1a7]: expected pod UID "9144eabf-83b9-49a6-a047-b2606a68d1a7" but got "06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" from Kube API Jan 09 11:06:22 crc kubenswrapper[4727]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 09 11:06:22 crc kubenswrapper[4727]: > pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.861576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862043 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz852\" (UniqueName: \"kubernetes.io/projected/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-kube-api-access-kz852\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862653 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862487 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.867166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.868332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.888220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz852\" (UniqueName: \"kubernetes.io/projected/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-kube-api-access-kz852\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.022877 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.218192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.235408 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.240692 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9144eabf-83b9-49a6-a047-b2606a68d1a7" podUID="06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378238 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378393 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378458 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378577 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.380861 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.387876 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.389874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.391000 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5" (OuterVolumeSpecName: "kube-api-access-sb9h5") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "kube-api-access-sb9h5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481657 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481712 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481734 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481755 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.594872 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.227567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4","Type":"ContainerStarted","Data":"9a59f1ab7d8270687e705ab8bfbfccd195336251e0f5de2cb74edb7519ad8495"} Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.227592 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.250251 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9144eabf-83b9-49a6-a047-b2606a68d1a7" podUID="06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.564549 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.877900 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9144eabf-83b9-49a6-a047-b2606a68d1a7" path="/var/lib/kubelet/pods/9144eabf-83b9-49a6-a047-b2606a68d1a7/volumes" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.868113 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67d6487995-f424z"] Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.873144 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.876699 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.877155 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.877354 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.899478 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67d6487995-f424z"] Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.945879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-log-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.945935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-run-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.945968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-public-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 
11:06:25.945991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-config-data\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.947691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-internal-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.948021 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-combined-ca-bundle\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.948537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-etc-swift\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.949452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbhtk\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-kube-api-access-rbhtk\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" 
Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053030 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-etc-swift\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053108 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbhtk\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-kube-api-access-rbhtk\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-log-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053156 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-run-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053174 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-public-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053190 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-config-data\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-internal-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-combined-ca-bundle\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-log-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.054621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-run-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.061400 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-internal-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.061491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-config-data\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.061676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-combined-ca-bundle\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.062019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-etc-swift\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.070916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-public-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.072004 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbhtk\" (UniqueName: 
\"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-kube-api-access-rbhtk\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.243189 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:27 crc kubenswrapper[4727]: I0109 11:06:27.094103 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67d6487995-f424z"] Jan 09 11:06:27 crc kubenswrapper[4727]: W0109 11:06:27.110395 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d5b74a_ef5f_4cb2_b043_e56bb3cbfcdb.slice/crio-eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef WatchSource:0}: Error finding container eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef: Status 404 returned error can't find the container with id eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef Jan 09 11:06:27 crc kubenswrapper[4727]: I0109 11:06:27.301018 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67d6487995-f424z" event={"ID":"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb","Type":"ContainerStarted","Data":"eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.011750 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.162883 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:28 
crc kubenswrapper[4727]: I0109 11:06:28.163132 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log" containerID="cri-o://a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.163599 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd" containerID="cri-o://a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.306723 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.307703 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent" containerID="cri-o://3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.309887 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core" containerID="cri-o://81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.309974 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent" containerID="cri-o://f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.310058 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" containerID="cri-o://e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.330165 4727 generic.go:334] "Generic (PLEG): container finished" podID="0333d9ce-e537-4702-9180-533644b70869" containerID="a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a" exitCode=143 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.330299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerDied","Data":"a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.343455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67d6487995-f424z" event={"ID":"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb","Type":"ContainerStarted","Data":"314d2449db889e5f19208d3bb30746c0b32b087176095f8faf4c4cf733675cba"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.343568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67d6487995-f424z" event={"ID":"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb","Type":"ContainerStarted","Data":"e12177246f9b22e39ccd4c29aa339a7926b0fe33886539a7d6b07bcb8eb8a1f8"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.343695 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.377727 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67d6487995-f424z" podStartSLOduration=3.377698028 podStartE2EDuration="3.377698028s" podCreationTimestamp="2026-01-09 11:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:28.36925202 +0000 UTC m=+1233.819156811" watchObservedRunningTime="2026-01-09 11:06:28.377698028 +0000 UTC m=+1233.827602819" Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.413335 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": read tcp 10.217.0.2:43452->10.217.0.168:3000: read: connection reset by peer" Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.366780 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" exitCode=0 Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.366859 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" exitCode=2 Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.366872 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" exitCode=0 Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.368664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"} Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.368754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"} Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 
11:06:29.368782 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.368918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"} Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.874480 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 11:06:32 crc kubenswrapper[4727]: I0109 11:06:32.401822 4727 generic.go:334] "Generic (PLEG): container finished" podID="0333d9ce-e537-4702-9180-533644b70869" containerID="a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db" exitCode=0 Jan 09 11:06:32 crc kubenswrapper[4727]: I0109 11:06:32.401883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerDied","Data":"a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db"} Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.137586 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.275563 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.283849 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.283970 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284091 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284123 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284210 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284237 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.285277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.285906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.290587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts" (OuterVolumeSpecName: "scripts") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.290596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m" (OuterVolumeSpecName: "kube-api-access-g9s2m") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "kube-api-access-g9s2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.339049 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: 
\"0333d9ce-e537-4702-9180-533644b70869\") "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385807 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385890 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385954 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386465 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386485 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386495 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386522 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386531 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.387696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs" (OuterVolumeSpecName: "logs") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.398338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts" (OuterVolumeSpecName: "scripts") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.402615 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.416041 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj" (OuterVolumeSpecName: "kube-api-access-5x8gj") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "kube-api-access-5x8gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.422819 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.429765 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.443835 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data" (OuterVolumeSpecName: "config-data") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461070 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" exitCode=0
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461170 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"}
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461250 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"c35551f5fd2325dd8ded3e2242e43e59a4eeb9e347df7aa845f106c0ffc6e15c"}
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461272 4727 scope.go:117] "RemoveContainer" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.464635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerDied","Data":"12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967"}
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.464845 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.467830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4","Type":"ContainerStarted","Data":"f072bbd068468554fe717389c742978c432d67269a53aad5b050c57ccce64416"}
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.468531 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.479660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data" (OuterVolumeSpecName: "config-data") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488913 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488959 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488973 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488985 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488996 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489009 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489022 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489033 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489044 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489118 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.496595 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.276491558 podStartE2EDuration="12.496562237s" podCreationTimestamp="2026-01-09 11:06:22 +0000 UTC" firstStartedPulling="2026-01-09 11:06:23.602281514 +0000 UTC m=+1229.052186305" lastFinishedPulling="2026-01-09 11:06:33.822352203 +0000 UTC m=+1239.272256984" observedRunningTime="2026-01-09 11:06:34.496476255 +0000 UTC m=+1239.946381056" watchObservedRunningTime="2026-01-09 11:06:34.496562237 +0000 UTC m=+1239.946467028"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.499538 4727 scope.go:117] "RemoveContainer" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.527354 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.532470 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.545871 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.554715 4727 scope.go:117] "RemoveContainer" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569151 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569664 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569686 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569707 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569714 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569729 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569736 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569750 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569756 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569767 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569772 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569782 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569788 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569969 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569985 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569997 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.570007 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.570017 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.570030 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.591947 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.593241 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.593416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.596048 4727 scope.go:117] "RemoveContainer" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.597287 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.597674 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.637174 4727 scope.go:117] "RemoveContainer" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.638004 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba\": container with ID starting with e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba not found: ID does not exist" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638079 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"} err="failed to get container status \"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba\": rpc error: code = NotFound desc = could not find container \"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba\": container with ID starting with e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba not found: ID does not exist"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638124 4727 scope.go:117] "RemoveContainer" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.638738 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2\": container with ID starting with 81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2 not found: ID does not exist" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638781 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"} err="failed to get container status \"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2\": rpc error: code = NotFound desc = could not find container \"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2\": container with ID starting with 81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2 not found: ID does not exist"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638818 4727 scope.go:117] "RemoveContainer" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.639388 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a\": container with ID starting with f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a not found: ID does not exist" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.639455 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"} err="failed to get container status \"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a\": rpc error: code = NotFound desc = could not find container \"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a\": container with ID starting with f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a not found: ID does not exist"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.639496 4727 scope.go:117] "RemoveContainer" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"
Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.640008 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226\": container with ID starting with 3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226 not found: ID does not exist" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.640043 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"} err="failed to get container status \"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226\": rpc error: code = NotFound desc = could not find container \"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226\": container with ID starting with 3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226 not found: ID does not exist"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.640060 4727 scope.go:117] "RemoveContainer" containerID="a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.677010 4727 scope.go:117] "RemoveContainer" containerID="a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694130 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694239 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694336 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796134 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796442 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.797328 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.797604 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.803799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.804102 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.805459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.806501 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.812489 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.823629 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.825397 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.839147 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.844603 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.850039 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.850284 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.892205 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0333d9ce-e537-4702-9180-533644b70869" path="/var/lib/kubelet/pods/0333d9ce-e537-4702-9180-533644b70869/volumes"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.896292 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" path="/var/lib/kubelet/pods/38361e01-9ca6-4c45-8b88-809107b70a25/volumes"
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.897332 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.922717 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.000469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001044 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001080 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001192 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001284 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbhr\" (UniqueName: \"kubernetes.io/projected/992ca8ba-ec96-4dc0-9442-464cbdce8afc-kube-api-access-klbhr\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103246 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbhr\" (UniqueName: \"kubernetes.io/projected/992ca8ba-ec96-4dc0-9442-464cbdce8afc-kube-api-access-klbhr\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103529 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103566 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103616 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.104289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.104593 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.105737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.127095 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.127364 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.136874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbhr\" (UniqueName: \"kubernetes.io/projected/992ca8ba-ec96-4dc0-9442-464cbdce8afc-kube-api-access-klbhr\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.138913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.153423 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.179296 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.210579 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.421069 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.484122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"53954f674440644c283f7c014b698cd7d333a0f6dbc5a2f2cda29cf21add04ea"} Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.911240 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:35 crc kubenswrapper[4727]: W0109 11:06:35.915838 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod992ca8ba_ec96_4dc0_9442_464cbdce8afc.slice/crio-840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b WatchSource:0}: Error finding container 840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b: Status 404 returned error can't find the container with id 840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.252982 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.254857 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.536961 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"992ca8ba-ec96-4dc0-9442-464cbdce8afc","Type":"ContainerStarted","Data":"840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b"} Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.543083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.557766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"992ca8ba-ec96-4dc0-9442-464cbdce8afc","Type":"ContainerStarted","Data":"f88012d7a0f6c75360813ec72689391ad9f83cabb290573b047ee7a12474ac10"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.558681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"992ca8ba-ec96-4dc0-9442-464cbdce8afc","Type":"ContainerStarted","Data":"c690d8102c45dd69803e5c699761883e5c2842c846ee2fc4de64aa59112668c2"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.563309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.574211 4727 generic.go:334] "Generic (PLEG): container finished" podID="3d0f92bc-9d54-4382-b822-064c339799c4" containerID="4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c" exitCode=137 Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.574289 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerDied","Data":"4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.601373 
4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.601348423 podStartE2EDuration="3.601348423s" podCreationTimestamp="2026-01-09 11:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:37.593742797 +0000 UTC m=+1243.043647588" watchObservedRunningTime="2026-01-09 11:06:37.601348423 +0000 UTC m=+1243.051253204" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.608048 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656739 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656804 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod 
\"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656856 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.657022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.659381 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.664116 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs" (OuterVolumeSpecName: "logs") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.668704 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.668835 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2" (OuterVolumeSpecName: "kube-api-access-8kxn2") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "kube-api-access-8kxn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.671978 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts" (OuterVolumeSpecName: "scripts") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.693589 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.742917 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data" (OuterVolumeSpecName: "config-data") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762191 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762242 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762260 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762268 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762277 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762286 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxn2\" (UniqueName: 
\"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762301 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.011665 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.011857 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.247867 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.586956 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b"} Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.589866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerDied","Data":"46e0819a2a4dd76f55beafd0dd463399c99fccea0ca8d438850be56e9391306d"} Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.589926 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.589942 4727 scope.go:117] "RemoveContainer" containerID="4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.628080 4727 scope.go:117] "RemoveContainer" containerID="89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.641685 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.670116 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.685640 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: E0109 11:06:38.686231 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686254 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" Jan 09 11:06:38 crc kubenswrapper[4727]: E0109 11:06:38.686289 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686297 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686535 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686568 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.687945 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.696137 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.696147 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.698990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.699698 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789004 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789103 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-scripts\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a36e4825-82aa-4263-a757-807b3c43d2fa-logs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 
11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789231 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7trj\" (UniqueName: \"kubernetes.io/projected/a36e4825-82aa-4263-a757-807b3c43d2fa-kube-api-access-d7trj\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789389 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a36e4825-82aa-4263-a757-807b3c43d2fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.873379 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" path="/var/lib/kubelet/pods/3d0f92bc-9d54-4382-b822-064c339799c4/volumes" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891003 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-scripts\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a36e4825-82aa-4263-a757-807b3c43d2fa-logs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891228 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891267 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7trj\" (UniqueName: \"kubernetes.io/projected/a36e4825-82aa-4263-a757-807b3c43d2fa-kube-api-access-d7trj\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a36e4825-82aa-4263-a757-807b3c43d2fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a36e4825-82aa-4263-a757-807b3c43d2fa-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.893301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a36e4825-82aa-4263-a757-807b3c43d2fa-logs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.900672 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.901199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.901220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.903956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-scripts\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.911039 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.918380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.919051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7trj\" (UniqueName: \"kubernetes.io/projected/a36e4825-82aa-4263-a757-807b3c43d2fa-kube-api-access-d7trj\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.022528 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.393683 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.617300 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a36e4825-82aa-4263-a757-807b3c43d2fa","Type":"ContainerStarted","Data":"c4ffd9a909de77c8047be5556f7d30d4cd94e6768fd71711bd3e102953c341b5"} Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b"} Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624423 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" containerID="cri-o://3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624569 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-notification-agent" containerID="cri-o://c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624571 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" containerID="cri-o://88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624614 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" containerID="cri-o://273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624446 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.651765 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9270059430000002 podStartE2EDuration="5.651745246s" podCreationTimestamp="2026-01-09 11:06:34 +0000 UTC" firstStartedPulling="2026-01-09 11:06:35.42362404 +0000 UTC m=+1240.873528811" lastFinishedPulling="2026-01-09 11:06:39.148363333 +0000 UTC m=+1244.598268114" observedRunningTime="2026-01-09 11:06:39.649789414 +0000 UTC m=+1245.099694205" watchObservedRunningTime="2026-01-09 11:06:39.651745246 +0000 UTC m=+1245.101650027" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.826946 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.827438 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" containerID="cri-o://4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.827694 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" containerID="cri-o://cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.836426 4727 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/glance-default-external-api-0" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": EOF" Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.647410 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a36e4825-82aa-4263-a757-807b3c43d2fa","Type":"ContainerStarted","Data":"812844ab9b6123b33c634cc42ae56d8c301dbf8002185d55731ae8f14f2e8c13"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.651147 4727 generic.go:334] "Generic (PLEG): container finished" podID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerID="4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223" exitCode=143 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.651203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerDied","Data":"4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656040 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" exitCode=0 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656072 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" exitCode=2 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656087 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" exitCode=0 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656146 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656241 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28"} Jan 09 11:06:41 crc kubenswrapper[4727]: I0109 11:06:41.680987 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a36e4825-82aa-4263-a757-807b3c43d2fa","Type":"ContainerStarted","Data":"2a618da410bc97f5f757c7fc9458fd23df5d20e4be11129aefa46d8cf0c996fb"} Jan 09 11:06:41 crc kubenswrapper[4727]: I0109 11:06:41.681545 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 11:06:41 crc kubenswrapper[4727]: I0109 11:06:41.726856 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.726826537 podStartE2EDuration="3.726826537s" podCreationTimestamp="2026-01-09 11:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:41.707069753 +0000 UTC m=+1247.156974554" watchObservedRunningTime="2026-01-09 11:06:41.726826537 +0000 UTC m=+1247.176731318" Jan 09 11:06:42 crc kubenswrapper[4727]: I0109 11:06:42.693578 4727 generic.go:334] "Generic (PLEG): container finished" podID="bddc5542-122d-4606-a57a-8830398a4c93" containerID="d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3" 
exitCode=137 Jan 09 11:06:42 crc kubenswrapper[4727]: I0109 11:06:42.693679 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerDied","Data":"d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3"} Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.451902 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.532880 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.532972 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533066 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533104 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533172 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533416 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.535660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs" (OuterVolumeSpecName: "logs") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.552210 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.573735 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw" (OuterVolumeSpecName: "kube-api-access-xf4tw") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "kube-api-access-xf4tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.578187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.593684 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data" (OuterVolumeSpecName: "config-data") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.596277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts" (OuterVolumeSpecName: "scripts") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.611499 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637200 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637262 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637279 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637298 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637311 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637323 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637334 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.705913 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerDied","Data":"f359bb60ecb5049a25ef11d10b22c031018c3de4d2dffb82f605df54479897f8"} Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.705970 4727 scope.go:117] "RemoveContainer" containerID="7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.706108 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.752859 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.779640 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.916238 4727 scope.go:117] "RemoveContainer" containerID="d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.306126 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461553 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461669 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461776 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461994 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.462033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.462058 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.462523 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.463616 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.464395 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.464420 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.470475 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts" (OuterVolumeSpecName: "scripts") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.471941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v" (OuterVolumeSpecName: "kube-api-access-tqj2v") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "kube-api-access-tqj2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.499235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.560148 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566188 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566227 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566238 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566249 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.621331 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data" (OuterVolumeSpecName: "config-data") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.667877 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.732950 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" exitCode=0 Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54"} Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733092 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"53954f674440644c283f7c014b698cd7d333a0f6dbc5a2f2cda29cf21add04ea"} Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733635 4727 scope.go:117] "RemoveContainer" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.824088 4727 scope.go:117] "RemoveContainer" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.829838 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.902199 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bddc5542-122d-4606-a57a-8830398a4c93" path="/var/lib/kubelet/pods/bddc5542-122d-4606-a57a-8830398a4c93/volumes" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.902878 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.902918 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903212 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903244 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903259 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903265 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903283 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903302 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-notification-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903307 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" 
containerName="ceilometer-notification-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903333 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903339 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903356 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906690 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906713 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906728 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-notification-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906741 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906750 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906765 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.908584 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.908701 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.912741 4727 scope.go:117] "RemoveContainer" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.914467 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.914848 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.960703 4727 scope.go:117] "RemoveContainer" containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.978011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.978070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.978110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979075 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979128 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.991269 4727 scope.go:117] "RemoveContainer" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.991991 4727 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b\": container with ID starting with 273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b not found: ID does not exist" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992057 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b"} err="failed to get container status \"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b\": rpc error: code = NotFound desc = could not find container \"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b\": container with ID starting with 273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b not found: ID does not exist" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992080 4727 scope.go:117] "RemoveContainer" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.992712 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b\": container with ID starting with 88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b not found: ID does not exist" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992773 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b"} err="failed to get container status \"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b\": rpc error: code = NotFound desc = could not find container 
\"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b\": container with ID starting with 88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b not found: ID does not exist" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992818 4727 scope.go:117] "RemoveContainer" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.993191 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28\": container with ID starting with c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28 not found: ID does not exist" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.993225 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28"} err="failed to get container status \"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28\": rpc error: code = NotFound desc = could not find container \"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28\": container with ID starting with c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28 not found: ID does not exist" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.993245 4727 scope.go:117] "RemoveContainer" containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.994831 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54\": container with ID starting with 3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54 not found: ID does not exist" 
containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.994910 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54"} err="failed to get container status \"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54\": rpc error: code = NotFound desc = could not find container \"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54\": container with ID starting with 3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54 not found: ID does not exist" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"ceilometer-0\" (UID: 
\"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081650 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.082567 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.082967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.087401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.087999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.089846 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.100313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.104204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.211702 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.212163 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc 
kubenswrapper[4727]: I0109 11:06:45.238645 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.261418 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.273711 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.762088 4727 generic.go:334] "Generic (PLEG): container finished" podID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerID="cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87" exitCode=0 Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.762241 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerDied","Data":"cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87"} Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.769200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.769242 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.802958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.907401 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.004881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.005032 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.005585 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.005673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006414 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006561 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006627 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006708 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs" (OuterVolumeSpecName: "logs") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.007052 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.007068 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.022528 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs" (OuterVolumeSpecName: "kube-api-access-69jrs") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "kube-api-access-69jrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.025459 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.034661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts" (OuterVolumeSpecName: "scripts") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.109948 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.110059 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.110076 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.142321 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.157485 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data" (OuterVolumeSpecName: "config-data") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.186678 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.217268 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.217322 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.217335 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.219792 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 
11:06:46.320281 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.778820 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6"} Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.778900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"f2bd9db006208a075f1ffda298772516cf088a891a012e3732a1779dc1575402"} Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.782006 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.782064 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerDied","Data":"64cc505548582ff0b92efe52617ea9736e870feb1d2d85557f334e68ae42a742"} Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.782627 4727 scope.go:117] "RemoveContainer" containerID="cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.827627 4727 scope.go:117] "RemoveContainer" containerID="4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.847157 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.905850 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" 
path="/var/lib/kubelet/pods/63e9021a-5a0b-4f42-985a-1d3f60e1356f/volumes" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907138 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907182 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: E0109 11:06:46.907549 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907572 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" Jan 09 11:06:46 crc kubenswrapper[4727]: E0109 11:06:46.907613 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907622 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907889 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907928 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.909473 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.909598 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.914804 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.914822 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037030 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037144 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mvj\" (UniqueName: \"kubernetes.io/projected/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-kube-api-access-r4mvj\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-logs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037332 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140786 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140882 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140919 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140983 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mvj\" (UniqueName: \"kubernetes.io/projected/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-kube-api-access-r4mvj\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " 
pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141184 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-logs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.143278 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.150882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.151935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-logs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.164681 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.173386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mvj\" (UniqueName: \"kubernetes.io/projected/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-kube-api-access-r4mvj\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.173775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.179250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.181574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.194720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.241149 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.804579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c"} Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.807591 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.807621 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.937973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.386262 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.824094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a","Type":"ContainerStarted","Data":"b61826b9b4a0d9c6bb8ec12fb34ee32091915b700e3896f7c3e954de3db94207"} Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.832777 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1"} Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.895002 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" path="/var/lib/kubelet/pods/5848a983-5b79-4b20-83bf-aa831b16a3de/volumes" Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.943377 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.944023 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.952474 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:49 crc kubenswrapper[4727]: I0109 11:06:49.849060 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a","Type":"ContainerStarted","Data":"396fbbaa7ae4a192d4bc57f3f2262d2f919b4aa24f7ce2707acdd79f7d97bcdc"} Jan 09 11:06:49 crc kubenswrapper[4727]: I0109 11:06:49.849523 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a","Type":"ContainerStarted","Data":"d6a30576bbb70208bfe01709850084dc396ed8bd963a51c56eaa24fa9b7e44d5"} Jan 09 11:06:49 crc kubenswrapper[4727]: I0109 11:06:49.881487 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.881461755 podStartE2EDuration="3.881461755s" podCreationTimestamp="2026-01-09 11:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:49.870021797 +0000 UTC m=+1255.319926598" watchObservedRunningTime="2026-01-09 11:06:49.881461755 +0000 UTC m=+1255.331366536" Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863560 4727 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-central-agent" containerID="cri-o://9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863560 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" containerID="cri-o://e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863597 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" containerID="cri-o://f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863611 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" containerID="cri-o://199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.882423 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.882466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49"} Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.894413 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.011140283 podStartE2EDuration="6.894381376s" 
podCreationTimestamp="2026-01-09 11:06:44 +0000 UTC" firstStartedPulling="2026-01-09 11:06:45.81779296 +0000 UTC m=+1251.267697741" lastFinishedPulling="2026-01-09 11:06:49.701034053 +0000 UTC m=+1255.150938834" observedRunningTime="2026-01-09 11:06:50.887015767 +0000 UTC m=+1256.336920568" watchObservedRunningTime="2026-01-09 11:06:50.894381376 +0000 UTC m=+1256.344286167" Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.383168 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.878660 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49" exitCode=0 Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879047 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1" exitCode=2 Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879060 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c" exitCode=0 Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.878741 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49"} Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1"} Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879210 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c"} Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.066800 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.068093 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.111479 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.171975 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.173445 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.179720 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.179892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.197439 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.279299 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.280858 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281784 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281814 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " 
pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.282876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.288905 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.298444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.329254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.383263 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.386895 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.394707 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod 
\"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.418538 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.426112 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.435373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.492347 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494102 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494207 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494375 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494421 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494451 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.496836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.497357 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.500273 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.515116 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.532891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.596456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.596938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " 
pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.597039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.597206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.598038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.617843 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.660246 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.698399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.705090 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.707623 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.708440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.709727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.709969 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.708888 4727 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.713435 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.741798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.815601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.815682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.919577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " 
pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.919972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.925053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.967830 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.987743 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.043017 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.056558 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:06:53 crc kubenswrapper[4727]: W0109 11:06:53.072596 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d27352_2f68_4ced_a541_7bbd8bf33fb1.slice/crio-d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2 WatchSource:0}: Error finding container d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2: Status 404 returned error can't find the container with id d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.162688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.317682 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.510197 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:06:53 crc kubenswrapper[4727]: W0109 11:06:53.696711 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod784df696_fe59_4d64_841e_53fa77ded98f.slice/crio-836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a WatchSource:0}: Error finding container 836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a: Status 404 returned error can't find the container with id 836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.707942 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.724523 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.909781 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerID="c3ed6956b8e31f8503a62e89b83a4ac7a7d349bbdaa2c48c86045a4720314a5c" exitCode=0 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.909912 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-911e-account-create-update-hznc7" event={"ID":"bf2c02d0-08f3-4174-a1a1-44b6b99df774","Type":"ContainerDied","Data":"c3ed6956b8e31f8503a62e89b83a4ac7a7d349bbdaa2c48c86045a4720314a5c"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.910263 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-911e-account-create-update-hznc7" event={"ID":"bf2c02d0-08f3-4174-a1a1-44b6b99df774","Type":"ContainerStarted","Data":"f1066e9f9870d5bb306bc2e02c89bc514fc6e053f3c1e28af25514a077f171c8"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.912818 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerStarted","Data":"e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.912880 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerStarted","Data":"aaf3c210c209a1662b5b7d70902f65a5d7e7c38eccb07edd70b1c7ba5ef156fe"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.922057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" 
event={"ID":"a403535a-35d2-487c-9fab-20360257ec11","Type":"ContainerStarted","Data":"a857de2bbeebd7efd2a26ea815022fd61196dc20801f7abb2a844f81c6fc6c43"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.925186 4727 generic.go:334] "Generic (PLEG): container finished" podID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerID="339bcb56de0d0083e60bb9f99ee6710c9861edb4bb896039162501a9d46ed6ed" exitCode=0 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.925382 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ljc8f" event={"ID":"37d27352-2f68-4ced-a541-7bbd8bf33fb1","Type":"ContainerDied","Data":"339bcb56de0d0083e60bb9f99ee6710c9861edb4bb896039162501a9d46ed6ed"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.925482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ljc8f" event={"ID":"37d27352-2f68-4ced-a541-7bbd8bf33fb1","Type":"ContainerStarted","Data":"d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.932270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" event={"ID":"784df696-fe59-4d64-841e-53fa77ded98f","Type":"ContainerStarted","Data":"836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.938398 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerID="f947874cac612f305507a7bdaf8471df8d3875799b74261e1f17af4a0dc3c24e" exitCode=0 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.938462 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q4g4f" event={"ID":"b7c40808-e98b-4a31-b057-5c5b38ed5774","Type":"ContainerDied","Data":"f947874cac612f305507a7bdaf8471df8d3875799b74261e1f17af4a0dc3c24e"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.938501 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q4g4f" event={"ID":"b7c40808-e98b-4a31-b057-5c5b38ed5774","Type":"ContainerStarted","Data":"9180015d32957f45be579c8855fd0fd063dd1ef6a963785bfaa5168f3af4dae4"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.950665 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qftd4" podStartSLOduration=1.9506386390000001 podStartE2EDuration="1.950638639s" podCreationTimestamp="2026-01-09 11:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:53.943934429 +0000 UTC m=+1259.393839210" watchObservedRunningTime="2026-01-09 11:06:53.950638639 +0000 UTC m=+1259.400543420" Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.999683 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" podStartSLOduration=1.999652703 podStartE2EDuration="1.999652703s" podCreationTimestamp="2026-01-09 11:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:53.986140559 +0000 UTC m=+1259.436045340" watchObservedRunningTime="2026-01-09 11:06:53.999652703 +0000 UTC m=+1259.449557474" Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.952457 4727 generic.go:334] "Generic (PLEG): container finished" podID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerID="e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff" exitCode=0 Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.952861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerDied","Data":"e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff"} Jan 09 11:06:54 crc 
kubenswrapper[4727]: I0109 11:06:54.959081 4727 generic.go:334] "Generic (PLEG): container finished" podID="a403535a-35d2-487c-9fab-20360257ec11" containerID="ddf7504037a0d74d61286b57ca98d5ca4686f34d2f909e9a72a2f12480874e58" exitCode=0 Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.959129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" event={"ID":"a403535a-35d2-487c-9fab-20360257ec11","Type":"ContainerDied","Data":"ddf7504037a0d74d61286b57ca98d5ca4686f34d2f909e9a72a2f12480874e58"} Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.971073 4727 generic.go:334] "Generic (PLEG): container finished" podID="784df696-fe59-4d64-841e-53fa77ded98f" containerID="478ae5028a10c820659c5824f58f2f2a67e0f6b5335c5e28c9b5c14e796d35bd" exitCode=0 Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.971293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" event={"ID":"784df696-fe59-4d64-841e-53fa77ded98f","Type":"ContainerDied","Data":"478ae5028a10c820659c5824f58f2f2a67e0f6b5335c5e28c9b5c14e796d35bd"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.400844 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.523891 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.523950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.525178 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37d27352-2f68-4ced-a541-7bbd8bf33fb1" (UID: "37d27352-2f68-4ced-a541-7bbd8bf33fb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.531307 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv" (OuterVolumeSpecName: "kube-api-access-49twv") pod "37d27352-2f68-4ced-a541-7bbd8bf33fb1" (UID: "37d27352-2f68-4ced-a541-7bbd8bf33fb1"). InnerVolumeSpecName "kube-api-access-49twv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.605538 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.615742 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.630313 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.630356 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731446 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731623 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod \"b7c40808-e98b-4a31-b057-5c5b38ed5774\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731679 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731802 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"b7c40808-e98b-4a31-b057-5c5b38ed5774\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.733097 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7c40808-e98b-4a31-b057-5c5b38ed5774" (UID: "b7c40808-e98b-4a31-b057-5c5b38ed5774"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.733447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf2c02d0-08f3-4174-a1a1-44b6b99df774" (UID: "bf2c02d0-08f3-4174-a1a1-44b6b99df774"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.736241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx" (OuterVolumeSpecName: "kube-api-access-95zhx") pod "b7c40808-e98b-4a31-b057-5c5b38ed5774" (UID: "b7c40808-e98b-4a31-b057-5c5b38ed5774"). InnerVolumeSpecName "kube-api-access-95zhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.737883 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl" (OuterVolumeSpecName: "kube-api-access-rfjtl") pod "bf2c02d0-08f3-4174-a1a1-44b6b99df774" (UID: "bf2c02d0-08f3-4174-a1a1-44b6b99df774"). InnerVolumeSpecName "kube-api-access-rfjtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833878 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833916 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833927 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833935 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.983258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ljc8f" event={"ID":"37d27352-2f68-4ced-a541-7bbd8bf33fb1","Type":"ContainerDied","Data":"d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.983285 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.983302 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.985893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q4g4f" event={"ID":"b7c40808-e98b-4a31-b057-5c5b38ed5774","Type":"ContainerDied","Data":"9180015d32957f45be579c8855fd0fd063dd1ef6a963785bfaa5168f3af4dae4"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.985929 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9180015d32957f45be579c8855fd0fd063dd1ef6a963785bfaa5168f3af4dae4" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.985931 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.988325 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-911e-account-create-update-hznc7" event={"ID":"bf2c02d0-08f3-4174-a1a1-44b6b99df774","Type":"ContainerDied","Data":"f1066e9f9870d5bb306bc2e02c89bc514fc6e053f3c1e28af25514a077f171c8"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.988427 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1066e9f9870d5bb306bc2e02c89bc514fc6e053f3c1e28af25514a077f171c8" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.988488 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.635849 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.644200 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.650806 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"784df696-fe59-4d64-841e-53fa77ded98f\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"21e56a97-f683-4290-b69b-ab92efd58b4c\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661565 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"a403535a-35d2-487c-9fab-20360257ec11\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661655 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"a403535a-35d2-487c-9fab-20360257ec11\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661696 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"784df696-fe59-4d64-841e-53fa77ded98f\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661756 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"21e56a97-f683-4290-b69b-ab92efd58b4c\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "784df696-fe59-4d64-841e-53fa77ded98f" (UID: "784df696-fe59-4d64-841e-53fa77ded98f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662380 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a403535a-35d2-487c-9fab-20360257ec11" (UID: "a403535a-35d2-487c-9fab-20360257ec11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21e56a97-f683-4290-b69b-ab92efd58b4c" (UID: "21e56a97-f683-4290-b69b-ab92efd58b4c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662549 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.672550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv" (OuterVolumeSpecName: "kube-api-access-5mbvv") pod "21e56a97-f683-4290-b69b-ab92efd58b4c" (UID: "21e56a97-f683-4290-b69b-ab92efd58b4c"). InnerVolumeSpecName "kube-api-access-5mbvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.674428 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5" (OuterVolumeSpecName: "kube-api-access-m2lt5") pod "784df696-fe59-4d64-841e-53fa77ded98f" (UID: "784df696-fe59-4d64-841e-53fa77ded98f"). InnerVolumeSpecName "kube-api-access-m2lt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.675839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl" (OuterVolumeSpecName: "kube-api-access-q64bl") pod "a403535a-35d2-487c-9fab-20360257ec11" (UID: "a403535a-35d2-487c-9fab-20360257ec11"). InnerVolumeSpecName "kube-api-access-q64bl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765882 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765917 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765932 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765944 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765956 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.001873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerDied","Data":"aaf3c210c209a1662b5b7d70902f65a5d7e7c38eccb07edd70b1c7ba5ef156fe"} Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.002325 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf3c210c209a1662b5b7d70902f65a5d7e7c38eccb07edd70b1c7ba5ef156fe" Jan 09 11:06:57 crc 
kubenswrapper[4727]: I0109 11:06:57.002404 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.005569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" event={"ID":"a403535a-35d2-487c-9fab-20360257ec11","Type":"ContainerDied","Data":"a857de2bbeebd7efd2a26ea815022fd61196dc20801f7abb2a844f81c6fc6c43"} Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.005597 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a857de2bbeebd7efd2a26ea815022fd61196dc20801f7abb2a844f81c6fc6c43" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.005652 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.007561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" event={"ID":"784df696-fe59-4d64-841e-53fa77ded98f","Type":"ContainerDied","Data":"836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a"} Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.007595 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.007641 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.241424 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.242852 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.279705 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.297557 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:06:58 crc kubenswrapper[4727]: I0109 11:06:58.017616 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:06:58 crc kubenswrapper[4727]: I0109 11:06:58.017678 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:07:00 crc kubenswrapper[4727]: I0109 11:07:00.340970 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:07:00 crc kubenswrapper[4727]: I0109 11:07:00.342069 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:07:00 crc kubenswrapper[4727]: I0109 11:07:00.358639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.066358 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6" exitCode=0 Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 
11:07:02.066539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6"} Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.428576 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.486892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.486995 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487090 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487334 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487396 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488070 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488218 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488963 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488985 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.496243 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp" (OuterVolumeSpecName: "kube-api-access-v58xp") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "kube-api-access-v58xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.496254 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts" (OuterVolumeSpecName: "scripts") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.525553 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.591538 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.591703 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.591783 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.595253 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.613174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data" (OuterVolumeSpecName: "config-data") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.693817 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.694234 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933412 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933883 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933905 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933920 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784df696-fe59-4d64-841e-53fa77ded98f" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933928 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="784df696-fe59-4d64-841e-53fa77ded98f" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933941 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-central-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933947 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" 
containerName="ceilometer-central-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933958 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933964 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933976 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933982 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933991 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a403535a-35d2-487c-9fab-20360257ec11" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933998 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a403535a-35d2-487c-9fab-20360257ec11" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934008 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934015 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934027 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934034 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934041 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934048 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934055 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934061 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934219 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="784df696-fe59-4d64-841e-53fa77ded98f" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934233 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a403535a-35d2-487c-9fab-20360257ec11" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934244 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934254 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934267 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" Jan 09 
11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934275 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934285 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-central-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934294 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934308 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934318 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.935081 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.940791 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.941288 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cm4fw" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.941319 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.954372 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.999463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.999556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.000054 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " 
pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.000378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.081316 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"f2bd9db006208a075f1ffda298772516cf088a891a012e3732a1779dc1575402"} Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.081395 4727 scope.go:117] "RemoveContainer" containerID="f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.081415 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.109624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.109679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.109870 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.110018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.115757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " 
pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.121230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.126774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.126856 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.139373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.142182 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.148130 4727 scope.go:117] "RemoveContainer" containerID="199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.161608 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.193183 4727 scope.go:117] "RemoveContainer" 
containerID="e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.201591 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.201789 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.206006 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.206616 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.236462 4727 scope.go:117] "RemoveContainer" containerID="9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.262434 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317232 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317366 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317400 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317449 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317597 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420498 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420637 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") 
pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420739 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420880 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.421518 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.425699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.430194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.432405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.437274 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.443552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.451450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.538087 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.790007 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:07:03 crc kubenswrapper[4727]: W0109 11:07:03.794780 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88c213a7_1f1e_4866_aa20_019382b42f61.slice/crio-5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486 WatchSource:0}: Error finding container 5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486: Status 404 returned error can't find the container with id 5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486 Jan 09 11:07:04 crc kubenswrapper[4727]: W0109 11:07:04.031596 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66917b73_91de_4ad9_8454_f617b6d48075.slice/crio-276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7 WatchSource:0}: Error finding container 276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7: Status 404 returned error can't find the container with id 276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7 Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.040763 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.098125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerStarted","Data":"5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486"} Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.099156 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7"} Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.877627 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" path="/var/lib/kubelet/pods/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b/volumes" Jan 09 11:07:05 crc kubenswrapper[4727]: I0109 11:07:05.111858 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e"} Jan 09 11:07:06 crc kubenswrapper[4727]: I0109 11:07:06.130373 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119"} Jan 09 11:07:07 crc kubenswrapper[4727]: I0109 11:07:07.142609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313"} Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.216575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerStarted","Data":"e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac"} Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.221531 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0"} Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.222149 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.279412 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6d58k" podStartSLOduration=3.116774675 podStartE2EDuration="10.279389027s" podCreationTimestamp="2026-01-09 11:07:02 +0000 UTC" firstStartedPulling="2026-01-09 11:07:03.801087229 +0000 UTC m=+1269.250992010" lastFinishedPulling="2026-01-09 11:07:10.963701591 +0000 UTC m=+1276.413606362" observedRunningTime="2026-01-09 11:07:12.243423955 +0000 UTC m=+1277.693328736" watchObservedRunningTime="2026-01-09 11:07:12.279389027 +0000 UTC m=+1277.729293808" Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.280062 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.353823094 podStartE2EDuration="9.280056494s" podCreationTimestamp="2026-01-09 11:07:03 +0000 UTC" firstStartedPulling="2026-01-09 11:07:04.034598053 +0000 UTC m=+1269.484502834" lastFinishedPulling="2026-01-09 11:07:10.960831453 +0000 UTC m=+1276.410736234" observedRunningTime="2026-01-09 11:07:12.271228916 +0000 UTC m=+1277.721133737" watchObservedRunningTime="2026-01-09 11:07:12.280056494 +0000 UTC m=+1277.729961295" Jan 09 11:07:23 crc kubenswrapper[4727]: I0109 11:07:23.370582 4727 generic.go:334] "Generic (PLEG): container finished" podID="88c213a7-1f1e-4866-aa20-019382b42f61" containerID="e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac" exitCode=0 Jan 09 11:07:23 crc kubenswrapper[4727]: I0109 11:07:23.370691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerDied","Data":"e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac"} Jan 09 11:07:24 crc kubenswrapper[4727]: I0109 11:07:24.884156 4727 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.046100 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.046945 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.047003 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.047113 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.054417 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts" (OuterVolumeSpecName: "scripts") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.055101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt" (OuterVolumeSpecName: "kube-api-access-clqnt") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "kube-api-access-clqnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.078082 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data" (OuterVolumeSpecName: "config-data") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.085215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150753 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150819 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150835 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150850 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.402647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerDied","Data":"5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486"} Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.402721 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.402768 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.599562 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 11:07:25 crc kubenswrapper[4727]: E0109 11:07:25.600243 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" containerName="nova-cell0-conductor-db-sync" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.600268 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" containerName="nova-cell0-conductor-db-sync" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.600688 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" containerName="nova-cell0-conductor-db-sync" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.601724 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.608469 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.612455 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cm4fw" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.615882 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.768435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txzq\" (UniqueName: \"kubernetes.io/projected/3aab78e7-6f64-4c9e-bb37-f670092f06eb-kube-api-access-8txzq\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc 
kubenswrapper[4727]: I0109 11:07:25.768536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.768618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.870406 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txzq\" (UniqueName: \"kubernetes.io/projected/3aab78e7-6f64-4c9e-bb37-f670092f06eb-kube-api-access-8txzq\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.870469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.870547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.877110 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.880794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.889828 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txzq\" (UniqueName: \"kubernetes.io/projected/3aab78e7-6f64-4c9e-bb37-f670092f06eb-kube-api-access-8txzq\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.937231 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:26 crc kubenswrapper[4727]: I0109 11:07:26.404289 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 11:07:26 crc kubenswrapper[4727]: W0109 11:07:26.408871 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aab78e7_6f64_4c9e_bb37_f670092f06eb.slice/crio-7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b WatchSource:0}: Error finding container 7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b: Status 404 returned error can't find the container with id 7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.423317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3aab78e7-6f64-4c9e-bb37-f670092f06eb","Type":"ContainerStarted","Data":"c8fc44ca2c634b15a716c734a55cc0211e84e35a36a4795cd5371387b4d5ccd5"} Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.423697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3aab78e7-6f64-4c9e-bb37-f670092f06eb","Type":"ContainerStarted","Data":"7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b"} Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.425708 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.444682 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.444659486 podStartE2EDuration="2.444659486s" podCreationTimestamp="2026-01-09 11:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
11:07:27.440019831 +0000 UTC m=+1292.889924632" watchObservedRunningTime="2026-01-09 11:07:27.444659486 +0000 UTC m=+1292.894564277" Jan 09 11:07:33 crc kubenswrapper[4727]: I0109 11:07:33.547299 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 11:07:35 crc kubenswrapper[4727]: I0109 11:07:35.969192 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.590393 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.592065 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.596290 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.596621 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.613963 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hwr\" (UniqueName: 
\"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716822 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.805056 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.806925 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.813383 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.818940 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.819085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.819129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.819225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.826495 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.830537 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.831012 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.853433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.891056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.909035 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.910678 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.917957 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.922684 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.923013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.923055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.923117 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.937261 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.938200 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.972591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.974104 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.980494 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024721 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 
crc kubenswrapper[4727]: I0109 11:07:37.024751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.028871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.039343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.087581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 
11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130468 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130495 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.131025 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.139944 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.155466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: 
I0109 11:07:37.157242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.166199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.186274 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.212248 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.223320 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.250235 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.252143 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.252188 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.252255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.254564 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.258288 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.266322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.266829 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.273236 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.282727 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.284885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.295815 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.303601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.322866 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.356758 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.356843 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.356985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357091 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357123 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357439 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.459923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod 
\"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.459994 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: 
\"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460271 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.461057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.461540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 
11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.462401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.462439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.462767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.469428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.478568 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.486440 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.492831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.597203 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.614826 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.620463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.781649 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.782973 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.786918 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.787203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.821813 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.884666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.885163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.885594 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.885747 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.897685 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.986057 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987717 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987955 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: W0109 11:07:38.020941 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f6c5e4_6c29_48d0_a5af_819557cc9e04.slice/crio-9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442 WatchSource:0}: Error finding container 9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442: Status 404 returned error can't find the container with id 9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442 Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.027910 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.028812 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.029919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.031114 
4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.038701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.111484 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.246423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.374601 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.634167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerStarted","Data":"3e496057afd48fb428863c25133769c9e960876cd410faca157a7658ba5d522c"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.637876 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerStarted","Data":"9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.640932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerStarted","Data":"c715a92f5aa615c93db65f6e9d930c15cd9844cbd3158043d67b9b3325878e65"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.642940 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerStarted","Data":"60bccc0ec47f588ad42cb564633edde3321617957b8b8fda8f4da812cc7b79ef"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.643985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerStarted","Data":"b0d29dd9f9da1aa242230e17c6109e9e60b379b92068ffedf5804d638ea36739"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.645815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerStarted","Data":"d4d95f5c2c800a4020d7d6b3b3d3edcecb93e5aeb2770089a779d7cd1b15ec07"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.676361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:07:38 crc kubenswrapper[4727]: W0109 11:07:38.680983 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc95f5eef_fff8_427b_9318_ebfcf188f0a9.slice/crio-426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975 WatchSource:0}: Error finding container 426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975: Status 404 returned error can't find the container with id 426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975 Jan 09 11:07:39 crc kubenswrapper[4727]: I0109 11:07:39.687872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerStarted","Data":"426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.704916 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerStarted","Data":"dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.711069 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerID="72f21ea3746f823a01ff3632cf334c040301673bdb3b5a878b6260e8b9af266c" exitCode=0 Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.711147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerDied","Data":"72f21ea3746f823a01ff3632cf334c040301673bdb3b5a878b6260e8b9af266c"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.713596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerStarted","Data":"f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.726949 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-br2nr" podStartSLOduration=3.726926716 podStartE2EDuration="3.726926716s" podCreationTimestamp="2026-01-09 11:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:40.723865793 +0000 UTC m=+1306.173770584" watchObservedRunningTime="2026-01-09 11:07:40.726926716 +0000 UTC m=+1306.176831507" Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.773778 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bd2gt" podStartSLOduration=4.773756679 podStartE2EDuration="4.773756679s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:40.766844474 +0000 UTC m=+1306.216749275" watchObservedRunningTime="2026-01-09 11:07:40.773756679 +0000 UTC m=+1306.223661470" Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.546727 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.559959 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.733847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerStarted","Data":"e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01"} Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.734213 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.736481 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.736830 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" containerID="cri-o://aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9" gracePeriod=30 Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.768808 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" podStartSLOduration=5.768775723 podStartE2EDuration="5.768775723s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
11:07:41.758207232 +0000 UTC m=+1307.208112033" watchObservedRunningTime="2026-01-09 11:07:41.768775723 +0000 UTC m=+1307.218680524" Jan 09 11:07:42 crc kubenswrapper[4727]: I0109 11:07:42.749679 4727 generic.go:334] "Generic (PLEG): container finished" podID="26965ac2-3dab-452c-8a34-83eadab4b929" containerID="aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9" exitCode=2 Jan 09 11:07:42 crc kubenswrapper[4727]: I0109 11:07:42.749758 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerDied","Data":"aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.198061 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.199950 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" containerID="cri-o://f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.200096 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" containerID="cri-o://0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.200024 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" containerID="cri-o://63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.200027 4727 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-notification-agent" containerID="cri-o://e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811444 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0" exitCode=0 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811854 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313" exitCode=2 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811864 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e" exitCode=0 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811958 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.814651 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerDied","Data":"049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.814678 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.110348 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.188800 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"26965ac2-3dab-452c-8a34-83eadab4b929\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.213519 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx" (OuterVolumeSpecName: "kube-api-access-zzgpx") pod "26965ac2-3dab-452c-8a34-83eadab4b929" (UID: "26965ac2-3dab-452c-8a34-83eadab4b929"). InnerVolumeSpecName "kube-api-access-zzgpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.297597 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.825910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerStarted","Data":"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.826149 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" gracePeriod=30 Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.831033 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log" containerID="cri-o://6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" gracePeriod=30 Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.831207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerStarted","Data":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.831273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerStarted","Data":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} Jan 09 11:07:45 crc 
kubenswrapper[4727]: I0109 11:07:45.831351 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata" containerID="cri-o://f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" gracePeriod=30
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.840112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerStarted","Data":"8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf"}
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.840179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerStarted","Data":"82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e"}
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.848788 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.850791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerStarted","Data":"576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0"}
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.851277 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.42299198 podStartE2EDuration="9.851258633s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:38.275040832 +0000 UTC m=+1303.724945613" lastFinishedPulling="2026-01-09 11:07:44.703307475 +0000 UTC m=+1310.153212266" observedRunningTime="2026-01-09 11:07:45.844299008 +0000 UTC m=+1311.294203789" watchObservedRunningTime="2026-01-09 11:07:45.851258633 +0000 UTC m=+1311.301163414"
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.890241 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.220687663 podStartE2EDuration="9.890220119s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:38.024325764 +0000 UTC m=+1303.474230545" lastFinishedPulling="2026-01-09 11:07:44.69385822 +0000 UTC m=+1310.143763001" observedRunningTime="2026-01-09 11:07:45.86542404 +0000 UTC m=+1311.315328831" watchObservedRunningTime="2026-01-09 11:07:45.890220119 +0000 UTC m=+1311.340124900"
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.908576 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.128325168 podStartE2EDuration="9.908555064s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:37.914250259 +0000 UTC m=+1303.364155040" lastFinishedPulling="2026-01-09 11:07:44.694480155 +0000 UTC m=+1310.144384936" observedRunningTime="2026-01-09 11:07:45.892894752 +0000 UTC m=+1311.342799553" watchObservedRunningTime="2026-01-09 11:07:45.908555064 +0000 UTC m=+1311.358459845"
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.967545 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.307221391 podStartE2EDuration="9.967489186s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:38.031894595 +0000 UTC m=+1303.481799376" lastFinishedPulling="2026-01-09 11:07:44.69216239 +0000 UTC m=+1310.142067171" observedRunningTime="2026-01-09 11:07:45.946927077 +0000 UTC m=+1311.396831848" watchObservedRunningTime="2026-01-09 11:07:45.967489186 +0000 UTC m=+1311.417393967"
Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.991249 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.013673 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.023809 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 11:07:46 crc kubenswrapper[4727]: E0109 11:07:46.024468 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.024504 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.024799 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.025946 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.033726 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.055797 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.055988 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.121475 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.121547 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.121575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.122193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzxc\" (UniqueName: \"kubernetes.io/projected/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-api-access-djzxc\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224205 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224277 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224310 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzxc\" (UniqueName: \"kubernetes.io/projected/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-api-access-djzxc\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.238188 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.240241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.244559 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.251726 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzxc\" (UniqueName: \"kubernetes.io/projected/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-api-access-djzxc\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.399390 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.799555 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845430 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") "
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") "
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845697 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") "
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845845 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") "
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.847686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs" (OuterVolumeSpecName: "logs") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.854839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh" (OuterVolumeSpecName: "kube-api-access-5l5zh") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "kube-api-access-5l5zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.877629 4727 generic.go:334] "Generic (PLEG): container finished" podID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" exitCode=0
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.877681 4727 generic.go:334] "Generic (PLEG): container finished" podID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" exitCode=143
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.878177 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.887882 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" path="/var/lib/kubelet/pods/26965ac2-3dab-452c-8a34-83eadab4b929/volumes"
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.891771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data" (OuterVolumeSpecName: "config-data") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.913076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950307 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950336 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950346 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950357 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970679 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerDied","Data":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"}
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerDied","Data":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"}
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerDied","Data":"d4d95f5c2c800a4020d7d6b3b3d3edcecb93e5aeb2770089a779d7cd1b15ec07"}
Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970803 4727 scope.go:117] "RemoveContainer" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.000465 4727 scope.go:117] "RemoveContainer" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.027367 4727 scope.go:117] "RemoveContainer" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"
Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.027976 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": container with ID starting with f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec not found: ID does not exist" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028121 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} err="failed to get container status \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": rpc error: code = NotFound desc = could not find container \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": container with ID starting with f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec not found: ID does not exist"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028228 4727 scope.go:117] "RemoveContainer" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"
Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.028730 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": container with ID starting with 6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70 not found: ID does not exist" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028798 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} err="failed to get container status \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": rpc error: code = NotFound desc = could not find container \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": container with ID starting with 6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70 not found: ID does not exist"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028828 4727 scope.go:117] "RemoveContainer" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.029137 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} err="failed to get container status \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": rpc error: code = NotFound desc = could not find container \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": container with ID starting with f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec not found: ID does not exist"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.029185 4727 scope.go:117] "RemoveContainer" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.029556 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} err="failed to get container status \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": rpc error: code = NotFound desc = could not find container \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": container with ID starting with 6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70 not found: ID does not exist"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.228449 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.240350 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.252229 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.252805 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.252827 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log"
Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.252869 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.252877 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.253103 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.253135 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.254497 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.257745 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.257930 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.259637 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.279200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.279260 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.325025 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.325087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.367945 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368290 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368340 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368578 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.379070 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.471501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.471911 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.479081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.479891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.489405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.509068 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.599836 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.612139 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.623792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.703346 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"]
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.705471 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" containerID="cri-o://8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713" gracePeriod=10
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.963095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3","Type":"ContainerStarted","Data":"a1a595cd42ec25bc504f905c30aa4de5558da8923298ed0b31ef2be0485bd6fb"}
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.963172 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3","Type":"ContainerStarted","Data":"d69c62c765598f296fde9fd0b9f0147883a6b052add7173c2feb6b3857d29099"}
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.963375 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.983888 4727 generic.go:334] "Generic (PLEG): container finished" podID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerID="8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713" exitCode=0
Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.985312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerDied","Data":"8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713"}
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.047360 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.082610 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.682129419 podStartE2EDuration="3.079440811s" podCreationTimestamp="2026-01-09 11:07:45 +0000 UTC" firstStartedPulling="2026-01-09 11:07:46.97348779 +0000 UTC m=+1312.423392581" lastFinishedPulling="2026-01-09 11:07:47.370799192 +0000 UTC m=+1312.820703973" observedRunningTime="2026-01-09 11:07:47.995110637 +0000 UTC m=+1313.445015418" watchObservedRunningTime="2026-01-09 11:07:48.079440811 +0000 UTC m=+1313.529345612"
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.365794 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.366252 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.366292 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.404399 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc"
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.510994 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") "
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.512820 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") "
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.512861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") "
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.512946 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") "
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.513158 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") "
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.513193 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") "
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.519323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr" (OuterVolumeSpecName: "kube-api-access-92mtr") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "kube-api-access-92mtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:07:48 crc kubenswrapper[4727]: W0109 11:07:48.562574 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f5264c_a972_499a_adff_2ee6089e9370.slice/crio-27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986 WatchSource:0}: Error finding container 27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986: Status 404 returned error can't find the container with id 27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.583486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.589539 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.604040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.619628 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.620044 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.620128 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.629393 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.636465 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config" (OuterVolumeSpecName: "config") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.683991 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.723810 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.723862 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.723875 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.887406 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" path="/var/lib/kubelet/pods/6cee5e1e-cd9a-4400-ab94-66383369a072/volumes" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 
11:07:49.013032 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119" exitCode=0 Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.013094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119"} Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.036815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerStarted","Data":"27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986"} Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.053215 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.053703 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerDied","Data":"1fc9e9988fd4856268dac8faebd8ec23ba321d236e5bf07d0594fdfe44867d1e"} Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.053736 4727 scope.go:117] "RemoveContainer" containerID="8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.123701 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.125991 4727 scope.go:117] "RemoveContainer" containerID="40bb9476bfc07b9354c89f5cbef3057e68cde163c53908f4d6837e2be7ee3f19" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.135210 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 
09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.200696 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339375 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339652 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339716 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339752 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339869 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339902 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.347006 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.347216 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.351908 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts" (OuterVolumeSpecName: "scripts") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.361611 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn" (OuterVolumeSpecName: "kube-api-access-khkrn") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "kube-api-access-khkrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.388966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443524 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443561 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443571 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443584 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443595 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.453732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.501227 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data" (OuterVolumeSpecName: "config-data") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.545867 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.545922 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.065716 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerStarted","Data":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.066125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerStarted","Data":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.070905 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7"} Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.070990 4727 scope.go:117] "RemoveContainer" containerID="63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.071257 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.092410 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.092367633 podStartE2EDuration="3.092367633s" podCreationTimestamp="2026-01-09 11:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:50.090064358 +0000 UTC m=+1315.539969139" watchObservedRunningTime="2026-01-09 11:07:50.092367633 +0000 UTC m=+1315.542272424" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.116808 4727 scope.go:117] "RemoveContainer" containerID="0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.148096 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.151220 4727 scope.go:117] "RemoveContainer" containerID="e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.174038 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.208948 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.213988 4727 scope.go:117] "RemoveContainer" containerID="f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226128 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-notification-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226161 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" 
containerName="ceilometer-notification-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226180 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226187 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226202 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226208 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226230 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226236 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226268 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226277 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226294 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="init" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226302 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="init" Jan 
09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226841 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226886 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226911 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226930 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-notification-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.227005 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.231960 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.236147 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.236715 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.237060 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.262407 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.283158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.284681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.284718 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.287363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289086 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289208 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.391486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"ceilometer-0\" (UID: 
\"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393165 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393665 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.394403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.394746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.398919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.399650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc 
kubenswrapper[4727]: I0109 11:07:50.401802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.402248 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.414673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.415425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.565875 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.879388 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66917b73-91de-4ad9-8454-f617b6d48075" path="/var/lib/kubelet/pods/66917b73-91de-4ad9-8454-f617b6d48075/volumes" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.881766 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" path="/var/lib/kubelet/pods/b50668e7-e061-453a-bfcb-09cd1392aa57/volumes" Jan 09 11:07:51 crc kubenswrapper[4727]: I0109 11:07:51.089279 4727 generic.go:334] "Generic (PLEG): container finished" podID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerID="f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677" exitCode=0 Jan 09 11:07:51 crc kubenswrapper[4727]: I0109 11:07:51.089400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerDied","Data":"f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677"} Jan 09 11:07:51 crc kubenswrapper[4727]: I0109 11:07:51.109271 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.103648 4727 generic.go:334] "Generic (PLEG): container finished" podID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerID="dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534" exitCode=0 Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.103766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerDied","Data":"dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534"} Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.107661 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca"} Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.107831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"a325755858225e11102c3b57ad31be80d35da46e13778310a2800ddb5d42db62"} Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.613244 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.613862 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.620399 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755303 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755355 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: 
\"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755379 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.765884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr" (OuterVolumeSpecName: "kube-api-access-h2hwr") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "kube-api-access-h2hwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.773018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts" (OuterVolumeSpecName: "scripts") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.793071 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.799674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data" (OuterVolumeSpecName: "config-data") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858393 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858451 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858468 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858481 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.119489 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247"} Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.122963 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerDied","Data":"b0d29dd9f9da1aa242230e17c6109e9e60b379b92068ffedf5804d638ea36739"} Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.122997 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d29dd9f9da1aa242230e17c6109e9e60b379b92068ffedf5804d638ea36739" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.123009 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.321893 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.322227 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" containerID="cri-o://82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e" gracePeriod=30 Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.322675 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" containerID="cri-o://8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf" gracePeriod=30 Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.352286 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.352528 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" containerID="cri-o://576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0" gracePeriod=30 Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 
11:07:53.440656 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.440985 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log" containerID="cri-o://a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" gracePeriod=30 Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.441273 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata" containerID="cri-o://d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" gracePeriod=30 Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.577403 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700622 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod 
\"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.709733 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts" (OuterVolumeSpecName: "scripts") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.715759 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8" (OuterVolumeSpecName: "kube-api-access-98zx8") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "kube-api-access-98zx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.753997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data" (OuterVolumeSpecName: "config-data") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.761885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806216 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806274 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806291 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806305 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.138897 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.141422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7"} Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.149897 4727 generic.go:334] "Generic (PLEG): container finished" podID="28f5264c-a972-499a-adff-2ee6089e9370" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" exitCode=0 Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.149933 4727 generic.go:334] "Generic (PLEG): container finished" podID="28f5264c-a972-499a-adff-2ee6089e9370" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" exitCode=143 Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.149985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerDied","Data":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerDied","Data":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150032 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerDied","Data":"27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986"} Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150050 4727 scope.go:117] "RemoveContainer" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 
11:07:54.150197 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.156840 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerID="82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e" exitCode=143 Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.156934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerDied","Data":"82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e"} Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.164538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerDied","Data":"426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975"} Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.164638 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.164681 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.218693 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.218852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.218976 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.219017 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.219094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.224721 4727 scope.go:117] "RemoveContainer" 
containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.227200 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs" (OuterVolumeSpecName: "logs") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.258829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g" (OuterVolumeSpecName: "kube-api-access-vwl7g") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "kube-api-access-vwl7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.292084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.326602 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327308 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerName="nova-manage" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327329 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerName="nova-manage" Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327345 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327352 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata" Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327370 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327376 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log" Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327388 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerName="nova-cell1-conductor-db-sync" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327424 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerName="nova-cell1-conductor-db-sync" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327733 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327763 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerName="nova-manage" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327773 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327788 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerName="nova-cell1-conductor-db-sync" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.328886 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.334840 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.334875 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.334886 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.337644 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.337851 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data" (OuterVolumeSpecName: "config-data") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.353587 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.364007 4727 scope.go:117] "RemoveContainer" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.364913 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": container with ID starting with d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c not found: ID does not exist" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.364953 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} err="failed to get container status \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": rpc error: code = NotFound desc = could not find container \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": container with ID starting with d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c not found: ID does not exist" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.364985 4727 scope.go:117] "RemoveContainer" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.367071 4727 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": container with ID starting with a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d not found: ID does not exist" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.367180 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} err="failed to get container status \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": rpc error: code = NotFound desc = could not find container \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": container with ID starting with a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d not found: ID does not exist" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.367263 4727 scope.go:117] "RemoveContainer" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.375525 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} err="failed to get container status \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": rpc error: code = NotFound desc = could not find container \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": container with ID starting with d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c not found: ID does not exist" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.375615 4727 scope.go:117] "RemoveContainer" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.376068 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} err="failed to get container status \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": rpc error: code = NotFound desc = could not find container \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": container with ID starting with a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d not found: ID does not exist" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.387630 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.436840 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.436970 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mns\" (UniqueName: \"kubernetes.io/projected/6a601271-3d79-4446-bc6f-81b4490541f4-kube-api-access-48mns\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.437069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.437132 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.437146 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.535288 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.538897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.539010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mns\" (UniqueName: \"kubernetes.io/projected/6a601271-3d79-4446-bc6f-81b4490541f4-kube-api-access-48mns\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.539122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " 
pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.546140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.546183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.555594 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.568143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mns\" (UniqueName: \"kubernetes.io/projected/6a601271-3d79-4446-bc6f-81b4490541f4-kube-api-access-48mns\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.575186 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.577541 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.582692 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.583396 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.610328 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643303 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.677707 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747358 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747397 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.748874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.754050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.755009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.755837 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.771220 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.899601 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f5264c-a972-499a-adff-2ee6089e9370" path="/var/lib/kubelet/pods/28f5264c-a972-499a-adff-2ee6089e9370/volumes" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.906504 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.181452 4727 generic.go:334] "Generic (PLEG): container finished" podID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerID="576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0" exitCode=0 Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.181533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerDied","Data":"576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0"} Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.183211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerDied","Data":"9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442"} Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.183236 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.206973 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.281947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.282154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.282243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.315064 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m" (OuterVolumeSpecName: "kube-api-access-2ld4m") pod "55f6c5e4-6c29-48d0-a5af-819557cc9e04" (UID: "55f6c5e4-6c29-48d0-a5af-819557cc9e04"). InnerVolumeSpecName "kube-api-access-2ld4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.363800 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data" (OuterVolumeSpecName: "config-data") pod "55f6c5e4-6c29-48d0-a5af-819557cc9e04" (UID: "55f6c5e4-6c29-48d0-a5af-819557cc9e04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.366686 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.378661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55f6c5e4-6c29-48d0-a5af-819557cc9e04" (UID: "55f6c5e4-6c29-48d0-a5af-819557cc9e04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.389883 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.389933 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.389982 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:55 crc kubenswrapper[4727]: W0109 11:07:55.557268 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ddc88_eab5_4564_a55d_aafb1d7084d2.slice/crio-2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91 WatchSource:0}: Error finding container 2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91: Status 404 returned error can't find the container with id 
2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91 Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.559156 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.196939 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.197487 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.200714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerStarted","Data":"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.200757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerStarted","Data":"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.200770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerStarted","Data":"2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.203218 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.203224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6a601271-3d79-4446-bc6f-81b4490541f4","Type":"ContainerStarted","Data":"d66d8ffc67819ad65b13ef623326e2acb7c14ec7bfa47782b057c5ef182ee5db"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.203289 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6a601271-3d79-4446-bc6f-81b4490541f4","Type":"ContainerStarted","Data":"4d93b4626d4b85fb5061cc047d0978aa660860c3759658cc2b8ddb401094f551"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.227597 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.293639152 podStartE2EDuration="6.227578813s" podCreationTimestamp="2026-01-09 11:07:50 +0000 UTC" firstStartedPulling="2026-01-09 11:07:51.124338946 +0000 UTC m=+1316.574243727" lastFinishedPulling="2026-01-09 11:07:55.058278607 +0000 UTC m=+1320.508183388" observedRunningTime="2026-01-09 11:07:56.222604745 +0000 UTC m=+1321.672509526" watchObservedRunningTime="2026-01-09 11:07:56.227578813 +0000 UTC m=+1321.677483594" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.260876 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.270412 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.287888 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: E0109 11:07:56.288777 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 
11:07:56.288903 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.289259 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.290150 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.293925 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.297004 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.296954972 podStartE2EDuration="2.296954972s" podCreationTimestamp="2026-01-09 11:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:56.276325652 +0000 UTC m=+1321.726230423" watchObservedRunningTime="2026-01-09 11:07:56.296954972 +0000 UTC m=+1321.746859753" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311382 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311455 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.383328 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.383295154 podStartE2EDuration="2.383295154s" podCreationTimestamp="2026-01-09 11:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:56.3280282 +0000 UTC m=+1321.777932981" watchObservedRunningTime="2026-01-09 11:07:56.383295154 +0000 UTC m=+1321.833199935" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.414238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.414324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.414462 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.433638 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.437243 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.443724 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.462898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.642110 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.892449 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" path="/var/lib/kubelet/pods/55f6c5e4-6c29-48d0-a5af-819557cc9e04/volumes" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.226441 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerID="8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf" exitCode=0 Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.227557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerDied","Data":"8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf"} Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.228022 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.307364 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.615019 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.672988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.675732 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.675934 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.676022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.679274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs" (OuterVolumeSpecName: "logs") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.686670 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx" (OuterVolumeSpecName: "kube-api-access-r5nkx") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "kube-api-access-r5nkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.791800 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.791848 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.795721 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.797102 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data" (OuterVolumeSpecName: "config-data") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.894475 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.894545 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.255575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerStarted","Data":"8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b"} Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.255980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerStarted","Data":"d1a0173db997c0ae943d3dd42cc0514969543ab4509f28fa217bff9b0acb28ed"} Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.261456 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerDied","Data":"3e496057afd48fb428863c25133769c9e960876cd410faca157a7658ba5d522c"} Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.261493 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.261543 4727 scope.go:117] "RemoveContainer" containerID="8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.286620 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.286584241 podStartE2EDuration="2.286584241s" podCreationTimestamp="2026-01-09 11:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:58.277593548 +0000 UTC m=+1323.727498339" watchObservedRunningTime="2026-01-09 11:07:58.286584241 +0000 UTC m=+1323.736489022" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.322724 4727 scope.go:117] "RemoveContainer" containerID="82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.323557 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.341594 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.350028 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: E0109 11:07:58.350731 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.350761 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" Jan 09 11:07:58 crc kubenswrapper[4727]: E0109 11:07:58.350786 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" Jan 
09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.350795 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.351033 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.351061 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.352366 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.356447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.379669 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.406774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.406849 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.407008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.407115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: 
I0109 11:07:58.510197 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.516654 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.529857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.539000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.711365 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.875238 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" path="/var/lib/kubelet/pods/2e3d825a-0b57-4562-9a27-b985dc3ddc38/volumes" Jan 09 11:07:59 crc kubenswrapper[4727]: I0109 11:07:59.281292 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:59 crc kubenswrapper[4727]: I0109 11:07:59.912247 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:07:59 crc kubenswrapper[4727]: I0109 11:07:59.916626 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.292577 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerStarted","Data":"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"} Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.292618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerStarted","Data":"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"} Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.292635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerStarted","Data":"f526c53e811d823737aee897638a2fd4e604c40040f0dc02dba42bf5050ad7d9"} Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.326068 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.326047625 podStartE2EDuration="2.326047625s" podCreationTimestamp="2026-01-09 11:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:00.324792765 +0000 UTC m=+1325.774697546" watchObservedRunningTime="2026-01-09 11:08:00.326047625 +0000 UTC m=+1325.775952406" Jan 09 11:08:01 crc kubenswrapper[4727]: I0109 11:08:01.643292 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 11:08:04 crc kubenswrapper[4727]: I0109 11:08:04.719375 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 09 11:08:04 crc kubenswrapper[4727]: I0109 11:08:04.907935 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 11:08:04 crc kubenswrapper[4727]: I0109 11:08:04.908089 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 11:08:05 crc kubenswrapper[4727]: I0109 11:08:05.929769 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:05 crc kubenswrapper[4727]: I0109 11:08:05.929779 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:06 crc kubenswrapper[4727]: I0109 11:08:06.643209 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 11:08:06 crc kubenswrapper[4727]: I0109 11:08:06.675109 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" 
Jan 09 11:08:07 crc kubenswrapper[4727]: I0109 11:08:07.409211 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 11:08:08 crc kubenswrapper[4727]: I0109 11:08:08.712363 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:08 crc kubenswrapper[4727]: I0109 11:08:08.712449 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.405323 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.405419 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.794792 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.794792 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:14 crc 
kubenswrapper[4727]: I0109 11:08:14.916311 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 11:08:14 crc kubenswrapper[4727]: I0109 11:08:14.918756 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 11:08:14 crc kubenswrapper[4727]: I0109 11:08:14.923198 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 11:08:15 crc kubenswrapper[4727]: I0109 11:08:15.466912 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.280946 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.403761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.404037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.404082 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.420721 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk" (OuterVolumeSpecName: "kube-api-access-cfcvk") pod "f916ebd1-61eb-489a-be7d-e2cc06b152b6" (UID: "f916ebd1-61eb-489a-be7d-e2cc06b152b6"). InnerVolumeSpecName "kube-api-access-cfcvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.440449 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f916ebd1-61eb-489a-be7d-e2cc06b152b6" (UID: "f916ebd1-61eb-489a-be7d-e2cc06b152b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.447110 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data" (OuterVolumeSpecName: "config-data") pod "f916ebd1-61eb-489a-be7d-e2cc06b152b6" (UID: "f916ebd1-61eb-489a-be7d-e2cc06b152b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475353 4727 generic.go:334] "Generic (PLEG): container finished" podID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" exitCode=137 Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475445 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerDied","Data":"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"} Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerDied","Data":"60bccc0ec47f588ad42cb564633edde3321617957b8b8fda8f4da812cc7b79ef"} Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475954 4727 scope.go:117] "RemoveContainer" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.506191 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.506360 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.506421 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.530381 4727 scope.go:117] "RemoveContainer" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" Jan 09 11:08:16 crc kubenswrapper[4727]: E0109 11:08:16.531221 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a\": container with ID starting with 046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a not found: ID does not exist" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.531310 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"} err="failed to get container status \"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a\": rpc error: code = NotFound desc = could not find container \"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a\": container with ID starting with 046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a not found: ID does not exist" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.550798 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.559752 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.576225 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:08:16 crc kubenswrapper[4727]: E0109 11:08:16.576692 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.576708 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.576906 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.577608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.580827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.580929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.581141 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.619088 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpbd\" (UniqueName: 
\"kubernetes.io/projected/7275705c-d408-4eb4-af28-b9b51403b913-kube-api-access-mbpbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711425 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.813930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814164 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpbd\" (UniqueName: \"kubernetes.io/projected/7275705c-d408-4eb4-af28-b9b51403b913-kube-api-access-mbpbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.818298 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.819635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.819731 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.820201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.839685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpbd\" (UniqueName: \"kubernetes.io/projected/7275705c-d408-4eb4-af28-b9b51403b913-kube-api-access-mbpbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.872849 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" path="/var/lib/kubelet/pods/f916ebd1-61eb-489a-be7d-e2cc06b152b6/volumes" Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.895855 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:17 crc kubenswrapper[4727]: I0109 11:08:17.408082 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:08:17 crc kubenswrapper[4727]: W0109 11:08:17.415419 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7275705c_d408_4eb4_af28_b9b51403b913.slice/crio-5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d WatchSource:0}: Error finding container 5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d: Status 404 returned error can't find the container with id 5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d Jan 09 11:08:17 crc kubenswrapper[4727]: I0109 11:08:17.487001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7275705c-d408-4eb4-af28-b9b51403b913","Type":"ContainerStarted","Data":"5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d"} Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.499259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7275705c-d408-4eb4-af28-b9b51403b913","Type":"ContainerStarted","Data":"ce5d1b45b36b5fa06f2ed56483b6d75519d9dc9bf45a022690ee452f1d296a91"} Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.532708 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.532680602 podStartE2EDuration="2.532680602s" podCreationTimestamp="2026-01-09 11:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:18.524977379 +0000 UTC m=+1343.974882170" watchObservedRunningTime="2026-01-09 11:08:18.532680602 +0000 UTC m=+1343.982585383" Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 
11:08:18.717063 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.717847 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.721058 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.722005 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.511271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.517874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.795066 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.804219 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.833903 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.889779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.889857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890596 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.993772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.993830 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.993963 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.994006 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.994057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.994085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.996313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.997114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.997365 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.997675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.013040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.032796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.139816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.578766 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.723457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"]
Jan 09 11:08:20 crc kubenswrapper[4727]: W0109 11:08:20.727241 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aa41a67_4a03_4479_8296_e3e0b3242cc6.slice/crio-4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96 WatchSource:0}: Error finding container 4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96: Status 404 returned error can't find the container with id 4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.533099 4727 generic.go:334] "Generic (PLEG): container finished" podID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerID="9c4c8b98157f83d68ea66f336ad75ea1176dca583b8fa920a9e02cc7a8302972" exitCode=0
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.535396 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerDied","Data":"9c4c8b98157f83d68ea66f336ad75ea1176dca583b8fa920a9e02cc7a8302972"}
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.535441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerStarted","Data":"4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96"}
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.896967 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.240342 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241211 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent" containerID="cri-o://d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca" gracePeriod=30
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241324 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent" containerID="cri-o://85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247" gracePeriod=30
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241326 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core" containerID="cri-o://b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7" gracePeriod=30
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241308 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd" containerID="cri-o://b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c" gracePeriod=30
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.550911 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerStarted","Data":"5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6"}
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.551164 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557393 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c" exitCode=0
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557474 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7" exitCode=2
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c"}
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557564 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7"}
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.588373 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" podStartSLOduration=3.588345205 podStartE2EDuration="3.588345205s" podCreationTimestamp="2026-01-09 11:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:22.578016989 +0000 UTC m=+1348.027921770" watchObservedRunningTime="2026-01-09 11:08:22.588345205 +0000 UTC m=+1348.038249986"
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.603211 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.604030 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" containerID="cri-o://b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" gracePeriod=30
Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.604210 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" containerID="cri-o://cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" gracePeriod=30
Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.571480 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca" exitCode=0
Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.571555 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca"}
Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.574595 4727 generic.go:334] "Generic (PLEG): container finished" podID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" exitCode=143
Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.574666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerDied","Data":"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"}
Jan 09 11:08:25 crc kubenswrapper[4727]: I0109 11:08:25.621150 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247" exitCode=0
Jan 09 11:08:25 crc kubenswrapper[4727]: I0109 11:08:25.621262 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247"}
Jan 09 11:08:25 crc kubenswrapper[4727]: I0109 11:08:25.952377 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023789 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023839 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023896 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023945 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024072 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024115 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024156 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.026054 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.032153 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.080984 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts" (OuterVolumeSpecName: "scripts") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.081069 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8" (OuterVolumeSpecName: "kube-api-access-sh9h8") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "kube-api-access-sh9h8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.095165 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.122618 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126590 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126631 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126646 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126661 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126675 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126685 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.169173 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.174960 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data" (OuterVolumeSpecName: "config-data") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.208161 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.250806 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.250855 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.371700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.371841 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.371960 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.372130 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") "
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.379052 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs" (OuterVolumeSpecName: "logs") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.382995 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.405780 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49" (OuterVolumeSpecName: "kube-api-access-4gn49") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "kube-api-access-4gn49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.478732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.485352 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.485391 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.501788 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data" (OuterVolumeSpecName: "config-data") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.587387 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.634720 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"a325755858225e11102c3b57ad31be80d35da46e13778310a2800ddb5d42db62"}
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.634798 4727 scope.go:117] "RemoveContainer" containerID="b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.634985 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640228 4727 generic.go:334] "Generic (PLEG): container finished" podID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" exitCode=0
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerDied","Data":"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"}
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640333 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerDied","Data":"f526c53e811d823737aee897638a2fd4e604c40040f0dc02dba42bf5050ad7d9"}
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640427 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.659765 4727 scope.go:117] "RemoveContainer" containerID="b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.699641 4727 scope.go:117] "RemoveContainer" containerID="85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.704849 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.721444 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.727097 4727 scope.go:117] "RemoveContainer" containerID="d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.735252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.746567 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.749681 4727 scope.go:117] "RemoveContainer" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771176 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771689 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771709 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771724 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771730 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771743 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771750 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771764 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771770 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771778 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771783 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771802 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771808 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771985 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771997 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772008 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772022 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772036 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772053 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.773247 4727 scope.go:117] "RemoveContainer" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.773950 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.780691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.780922 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.781123 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.790707 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.792396 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.799069 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.800814 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.801831 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.802782 4727 scope.go:117] "RemoveContainer" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.804221 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e\": container with ID starting with b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e not found: ID does not exist" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.804253 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"} err="failed to get container status \"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e\": rpc error: code = NotFound desc = could not find container \"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e\": container with ID starting with b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e not found: ID does not exist"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.804271 4727 scope.go:117] "RemoveContainer" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"
Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.804634 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665\": container with ID starting with cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665 not found: ID does not exist" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.804656 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"} err="failed to get container status \"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665\": rpc error: code = NotFound desc = could not find container \"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665\": container with ID starting with cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665 not found: ID does not exist"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.837271 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.849552 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.873045 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255b7479-c152-4860-8978-4a81a53287cc" path="/var/lib/kubelet/pods/255b7479-c152-4860-8978-4a81a53287cc/volumes"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.873904 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" path="/var/lib/kubelet/pods/54db797b-aa1b-4b6e-a17f-0287f920392c/volumes"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893927 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893950 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894041 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-config-data\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-scripts\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0"
Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-run-httpd\") pod \"ceilometer-0\"
(UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.895024 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.895233 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-log-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.895331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9nw\" (UniqueName: \"kubernetes.io/projected/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-kube-api-access-cs9nw\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.902778 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.925365 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997745 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-run-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-log-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") 
" pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998004 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9nw\" (UniqueName: \"kubernetes.io/projected/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-kube-api-access-cs9nw\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998083 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-config-data\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998215 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-scripts\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998242 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-run-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.000083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.000901 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-log-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.003896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.003964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.004639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-config-data\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.009295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.009867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " 
pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.010787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-scripts\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.013238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.014650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.020899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.021254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.025417 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9nw\" (UniqueName: 
\"kubernetes.io/projected/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-kube-api-access-cs9nw\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.103251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.133614 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.647550 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.650666 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.678090 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.806244 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.945361 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.959420 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.964586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.965642 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.972281 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131389 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131490 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9kb\" (UniqueName: 
\"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.233852 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.233955 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.233985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.234029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.239159 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.240036 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.241634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.255869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.299336 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.679091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerStarted","Data":"ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.679155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerStarted","Data":"9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.679167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerStarted","Data":"421034a4e0c580642b2ba309c9af86d09352bc6febabbaafe996bdae2b0a1dad"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.682974 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"314afaaa031c74fb8921e0263a22e087b8f7c777c96e18d6d855040c57e2fd64"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.683043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"92f749d97d787c6f55364a03df030025cfb62a1778b49399ed602f4bcf18c667"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.711312 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.711287842 podStartE2EDuration="2.711287842s" podCreationTimestamp="2026-01-09 11:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:28.701178412 
+0000 UTC m=+1354.151083193" watchObservedRunningTime="2026-01-09 11:08:28.711287842 +0000 UTC m=+1354.161192623" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.970690 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.738994 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerStarted","Data":"2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072"} Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.739481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerStarted","Data":"cdb5199777c08eb82f009b4267902debd8ae6355ee99bf36fd992bb76e143bcb"} Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.756777 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"0c2e19067acce7276f037db6618440ecb38f0b2632681376182d9d037b6ae398"} Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.812859 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wtb77" podStartSLOduration=2.812837657 podStartE2EDuration="2.812837657s" podCreationTimestamp="2026-01-09 11:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:29.774057787 +0000 UTC m=+1355.223962588" watchObservedRunningTime="2026-01-09 11:08:29.812837657 +0000 UTC m=+1355.262742428" Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.150711 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:30 crc 
kubenswrapper[4727]: I0109 11:08:30.241831 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.242440 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" containerID="cri-o://e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01" gracePeriod=10 Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.780122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"8d33d3481591d48d40b9b44f2b11f796c92f8ba863cf0bd3de919fb6b2ea963f"} Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.788898 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerID="e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01" exitCode=0 Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.789920 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerDied","Data":"e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01"} Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.789944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerDied","Data":"c715a92f5aa615c93db65f6e9d930c15cd9844cbd3158043d67b9b3325878e65"} Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.789956 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c715a92f5aa615c93db65f6e9d930c15cd9844cbd3158043d67b9b3325878e65" Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.819490 4727 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.918504 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.918930 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919132 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919267 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919453 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.944804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7" (OuterVolumeSpecName: "kube-api-access-mdlw7") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "kube-api-access-mdlw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.003088 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.004673 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.010661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config" (OuterVolumeSpecName: "config") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022262 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022310 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022322 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022331 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.023372 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.079203 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.124414 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.124488 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.798936 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.851645 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.872817 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:08:32 crc kubenswrapper[4727]: I0109 11:08:32.904760 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" path="/var/lib/kubelet/pods/0ad24155-2081-4c95-b3ba-2217f670d8b4/volumes" Jan 09 11:08:35 crc kubenswrapper[4727]: I0109 11:08:35.848105 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"3a3c3a2e4e13e025a46effc4d82811518a3cc554f573b4967222503a57c1f202"} Jan 09 11:08:35 crc kubenswrapper[4727]: I0109 11:08:35.849106 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:08:35 crc kubenswrapper[4727]: I0109 11:08:35.891200 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.1852602819999998 podStartE2EDuration="9.891169055s" podCreationTimestamp="2026-01-09 11:08:26 +0000 UTC" firstStartedPulling="2026-01-09 11:08:27.650326801 +0000 UTC m=+1353.100231582" lastFinishedPulling="2026-01-09 11:08:35.356235574 +0000 UTC m=+1360.806140355" observedRunningTime="2026-01-09 11:08:35.877864919 +0000 UTC m=+1361.327769700" watchObservedRunningTime="2026-01-09 11:08:35.891169055 +0000 UTC m=+1361.341073846" Jan 09 11:08:36 crc kubenswrapper[4727]: I0109 11:08:36.860494 4727 generic.go:334] "Generic (PLEG): container finished" podID="fd540af1-9862-4759-ad16-587bbd49fea1" containerID="2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072" exitCode=0 Jan 09 11:08:36 crc kubenswrapper[4727]: I0109 11:08:36.873111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerDied","Data":"2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072"} Jan 09 11:08:37 crc kubenswrapper[4727]: I0109 11:08:37.134767 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:37 crc kubenswrapper[4727]: I0109 11:08:37.135117 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.154074 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.155035 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.316639 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.389896 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.390262 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.390332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.390460 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.398827 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb" (OuterVolumeSpecName: "kube-api-access-5f9kb") pod 
"fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "kube-api-access-5f9kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.407824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts" (OuterVolumeSpecName: "scripts") pod "fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.447906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data" (OuterVolumeSpecName: "config-data") pod "fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.458602 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.494853 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.495252 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.495347 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.495413 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.911074 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerDied","Data":"cdb5199777c08eb82f009b4267902debd8ae6355ee99bf36fd992bb76e143bcb"} Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.911670 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb5199777c08eb82f009b4267902debd8ae6355ee99bf36fd992bb76e143bcb" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.911170 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.095922 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.096361 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" containerID="cri-o://9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.096598 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" containerID="cri-o://ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.145068 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.145535 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" containerID="cri-o://e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.145756 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" containerID="cri-o://64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.170744 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.171032 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" containerID="cri-o://8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.404950 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.405041 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.922775 4727 generic.go:334] "Generic (PLEG): container finished" podID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerID="9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180" exitCode=143 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.922886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerDied","Data":"9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180"} Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.925421 4727 generic.go:334] "Generic (PLEG): container finished" podID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" exitCode=143 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.925477 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerDied","Data":"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb"} Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.644259 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.645445 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.646019 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.646069 4727 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:41 
crc kubenswrapper[4727]: I0109 11:08:41.949971 4727 generic.go:334] "Generic (PLEG): container finished" podID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" exitCode=0 Jan 09 11:08:41 crc kubenswrapper[4727]: I0109 11:08:41.950054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerDied","Data":"8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.317064 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:34826->10.217.0.198:8775: read: connection reset by peer" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.317069 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:34812->10.217.0.198:8775: read: connection reset by peer" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.559566 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.704786 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.705033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.705154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.713009 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl" (OuterVolumeSpecName: "kube-api-access-jnnfl") pod "bd5e3ba1-41fe-4ad8-997a-cae63667c74c" (UID: "bd5e3ba1-41fe-4ad8-997a-cae63667c74c"). InnerVolumeSpecName "kube-api-access-jnnfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.741857 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd5e3ba1-41fe-4ad8-997a-cae63667c74c" (UID: "bd5e3ba1-41fe-4ad8-997a-cae63667c74c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.750888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data" (OuterVolumeSpecName: "config-data") pod "bd5e3ba1-41fe-4ad8-997a-cae63667c74c" (UID: "bd5e3ba1-41fe-4ad8-997a-cae63667c74c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.811475 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.811530 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.811549 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.823087 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913486 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913870 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913929 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.914783 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs" (OuterVolumeSpecName: "logs") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.918985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh" (OuterVolumeSpecName: "kube-api-access-2nmfh") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "kube-api-access-2nmfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.945457 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.967380 4727 generic.go:334] "Generic (PLEG): container finished" podID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" exitCode=0 Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.967488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerDied","Data":"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.968444 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerDied","Data":"2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.968482 4727 scope.go:117] "RemoveContainer" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.974083 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.975612 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data" (OuterVolumeSpecName: "config-data") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.980335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerDied","Data":"d1a0173db997c0ae943d3dd42cc0514969543ab4509f28fa217bff9b0acb28ed"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.980433 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.007639 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016162 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016199 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016209 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016220 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016230 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.071780 4727 scope.go:117] "RemoveContainer" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.094169 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.179892 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.183188 4727 scope.go:117] "RemoveContainer" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.184192 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d\": container with ID starting with 64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d not found: ID does not exist" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.184257 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d"} err="failed to get container status \"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d\": rpc error: code = NotFound desc = could not find container \"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d\": container with ID starting with 
64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d not found: ID does not exist" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.184301 4727 scope.go:117] "RemoveContainer" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.184994 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb\": container with ID starting with e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb not found: ID does not exist" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.185040 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb"} err="failed to get container status \"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb\": rpc error: code = NotFound desc = could not find container \"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb\": container with ID starting with e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb not found: ID does not exist" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.185060 4727 scope.go:117] "RemoveContainer" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198101 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198828 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198846 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198862 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198868 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198892 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198899 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198912 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="init" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198918 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="init" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198928 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" containerName="nova-manage" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198934 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" containerName="nova-manage" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198969 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198978 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199153 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199162 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199184 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199195 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" containerName="nova-manage" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199208 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.200007 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.202162 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.213448 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.313406 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.322451 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.328740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.328796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmcz\" (UniqueName: \"kubernetes.io/projected/1203f055-468b-48e1-b859-78a4d11d5034-kube-api-access-cmmcz\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.328833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-config-data\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.350355 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 
11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.352295 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.355544 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.355857 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.376858 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.430590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.430642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6024d35-671e-4814-9c13-de9897a984ee-logs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.430681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcpg\" 
(UniqueName: \"kubernetes.io/projected/c6024d35-671e-4814-9c13-de9897a984ee-kube-api-access-hrcpg\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431177 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-config-data\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431415 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmcz\" (UniqueName: \"kubernetes.io/projected/1203f055-468b-48e1-b859-78a4d11d5034-kube-api-access-cmmcz\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431528 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-config-data\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.436370 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-config-data\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " 
pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.436561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.454000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmcz\" (UniqueName: \"kubernetes.io/projected/1203f055-468b-48e1-b859-78a4d11d5034-kube-api-access-cmmcz\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.529242 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcpg\" (UniqueName: \"kubernetes.io/projected/c6024d35-671e-4814-9c13-de9897a984ee-kube-api-access-hrcpg\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-config-data\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534268 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6024d35-671e-4814-9c13-de9897a984ee-logs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.537683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6024d35-671e-4814-9c13-de9897a984ee-logs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.538880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.540054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.543301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-config-data\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.563325 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcpg\" (UniqueName: \"kubernetes.io/projected/c6024d35-671e-4814-9c13-de9897a984ee-kube-api-access-hrcpg\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.687185 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.107958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.228486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:44 crc kubenswrapper[4727]: W0109 11:08:44.233467 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6024d35_671e_4814_9c13_de9897a984ee.slice/crio-fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32 WatchSource:0}: Error finding container fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32: Status 404 returned error can't find the container with id fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32 Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.876043 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" path="/var/lib/kubelet/pods/3b8ddc88-eab5-4564-a55d-aafb1d7084d2/volumes" Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.877179 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" path="/var/lib/kubelet/pods/bd5e3ba1-41fe-4ad8-997a-cae63667c74c/volumes" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.007980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6024d35-671e-4814-9c13-de9897a984ee","Type":"ContainerStarted","Data":"eb77879a9872318ce0bcd8eba66410cbc7a94538274be7a56f2b2430825c33c8"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.008407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6024d35-671e-4814-9c13-de9897a984ee","Type":"ContainerStarted","Data":"96eb51f266c155d1a08f738b33bc4ed9f8d9117193d99b3d916d10faebe405f7"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.008489 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6024d35-671e-4814-9c13-de9897a984ee","Type":"ContainerStarted","Data":"fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.010425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1203f055-468b-48e1-b859-78a4d11d5034","Type":"ContainerStarted","Data":"e2d3ff1b5df6379d7a8debd96fe2ecc4093799357bb5247dff9f51e8c37fcc10"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.010473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1203f055-468b-48e1-b859-78a4d11d5034","Type":"ContainerStarted","Data":"2fc2590206384788b14d3492c5b28d63b1dd46fbffbf24870eac90278edc0e95"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013574 4727 generic.go:334] "Generic (PLEG): container finished" podID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerID="ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30" exitCode=0 Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013612 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerDied","Data":"ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013631 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerDied","Data":"421034a4e0c580642b2ba309c9af86d09352bc6febabbaafe996bdae2b0a1dad"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013647 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421034a4e0c580642b2ba309c9af86d09352bc6febabbaafe996bdae2b0a1dad" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.041634 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.041613466 podStartE2EDuration="2.041613466s" podCreationTimestamp="2026-01-09 11:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:45.034347543 +0000 UTC m=+1370.484252344" watchObservedRunningTime="2026-01-09 11:08:45.041613466 +0000 UTC m=+1370.491518247" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.048064 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.055146 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.055123977 podStartE2EDuration="2.055123977s" podCreationTimestamp="2026-01-09 11:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:45.050553688 +0000 UTC m=+1370.500458469" watchObservedRunningTime="2026-01-09 11:08:45.055123977 +0000 UTC m=+1370.505028758" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071359 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071476 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: 
\"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.073866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs" (OuterVolumeSpecName: "logs") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.076716 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.076779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.077818 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.079928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74" (OuterVolumeSpecName: "kube-api-access-9cq74") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "kube-api-access-9cq74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.105570 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.116033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data" (OuterVolumeSpecName: "config-data") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.136734 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.153190 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.180709 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.180969 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.181131 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.181209 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.181267 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.023885 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.060041 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.071696 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.098895 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: E0109 11:08:46.099291 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099309 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" Jan 09 11:08:46 crc kubenswrapper[4727]: E0109 11:08:46.099337 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099345 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099545 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099576 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.100589 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.106350 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.106476 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.106643 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.118554 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.220894 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdfq\" (UniqueName: \"kubernetes.io/projected/7bfcd192-734d-4709-b2c3-9abafc15a30e-kube-api-access-vkdfq\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221009 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-config-data\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfcd192-734d-4709-b2c3-9abafc15a30e-logs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-public-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221443 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-config-data\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfcd192-734d-4709-b2c3-9abafc15a30e-logs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-public-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc 
kubenswrapper[4727]: I0109 11:08:46.323178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdfq\" (UniqueName: \"kubernetes.io/projected/7bfcd192-734d-4709-b2c3-9abafc15a30e-kube-api-access-vkdfq\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323678 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfcd192-734d-4709-b2c3-9abafc15a30e-logs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.328736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-config-data\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.329762 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.330153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.330193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-public-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.342787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdfq\" (UniqueName: \"kubernetes.io/projected/7bfcd192-734d-4709-b2c3-9abafc15a30e-kube-api-access-vkdfq\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.426177 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.873469 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" path="/var/lib/kubelet/pods/deb84a78-3539-489f-a5d0-417c0c2f1e4d/volumes" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.902782 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:47 crc kubenswrapper[4727]: I0109 11:08:47.036103 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfcd192-734d-4709-b2c3-9abafc15a30e","Type":"ContainerStarted","Data":"9297c6cd4fca3b2bb3119fc5e11df9bcc876ae93f9174e875ab5072c4e2dcaa1"} Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.049012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfcd192-734d-4709-b2c3-9abafc15a30e","Type":"ContainerStarted","Data":"adf2307c5f35eee090c67df22f14204ab8e5426b7fefd531becc7898a9f485c3"} Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.049347 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfcd192-734d-4709-b2c3-9abafc15a30e","Type":"ContainerStarted","Data":"de6ca1e17c531f8d9812bdd5ae78b5648b4ab7e7f290693c06458bc8db3857df"} Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.085407 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.085378514 podStartE2EDuration="2.085378514s" podCreationTimestamp="2026-01-09 11:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:48.076641246 +0000 UTC m=+1373.526546047" watchObservedRunningTime="2026-01-09 11:08:48.085378514 +0000 UTC m=+1373.535283295" Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.529441 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.688294 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.688459 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.530102 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.563615 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.688020 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.688081 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 11:08:54 crc kubenswrapper[4727]: I0109 11:08:54.135384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 11:08:54 crc kubenswrapper[4727]: I0109 11:08:54.700856 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c6024d35-671e-4814-9c13-de9897a984ee" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:54 crc kubenswrapper[4727]: I0109 11:08:54.701326 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c6024d35-671e-4814-9c13-de9897a984ee" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 
11:08:56 crc kubenswrapper[4727]: I0109 11:08:56.426799 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:56 crc kubenswrapper[4727]: I0109 11:08:56.427476 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:57 crc kubenswrapper[4727]: I0109 11:08:57.115543 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 11:08:57 crc kubenswrapper[4727]: I0109 11:08:57.445676 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7bfcd192-734d-4709-b2c3-9abafc15a30e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:57 crc kubenswrapper[4727]: I0109 11:08:57.445714 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7bfcd192-734d-4709-b2c3-9abafc15a30e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.697879 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.698670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.707131 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.709193 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.433313 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.434803 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.434837 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.445373 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 11:09:07 crc kubenswrapper[4727]: I0109 11:09:07.262628 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 11:09:07 crc kubenswrapper[4727]: I0109 11:09:07.270865 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.405114 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.405612 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.405673 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.406638 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.406710 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a" gracePeriod=600 Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.296120 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a" exitCode=0 Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.296179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a"} Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.297043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019"} Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.297076 4727 scope.go:117] "RemoveContainer" containerID="3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c" Jan 09 11:09:15 crc kubenswrapper[4727]: I0109 11:09:15.779274 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:16 crc kubenswrapper[4727]: I0109 11:09:16.809332 4727 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:20 crc kubenswrapper[4727]: I0109 11:09:20.883563 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" containerID="cri-o://9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633" gracePeriod=604795 Jan 09 11:09:21 crc kubenswrapper[4727]: I0109 11:09:21.695078 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" containerID="cri-o://6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" gracePeriod=604796 Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.511632 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerID="9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633" exitCode=0 Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.511697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerDied","Data":"9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633"} Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.512457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerDied","Data":"992da0c7f6705ab24fafadc1d428d6d6e4d619876e23e4c5406d83cc5794cf74"} Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.512475 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992da0c7f6705ab24fafadc1d428d6d6e4d619876e23e4c5406d83cc5794cf74" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.584850 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718288 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718487 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718564 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 
11:09:27.719839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.720181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.724100 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.730815 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.733187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.744837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96" (OuterVolumeSpecName: "kube-api-access-bfc96") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "kube-api-access-bfc96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.745376 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info" (OuterVolumeSpecName: "pod-info") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.753912 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.792151 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data" (OuterVolumeSpecName: "config-data") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.819067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf" (OuterVolumeSpecName: "server-conf") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821331 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821383 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821397 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821407 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 
11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821417 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821425 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821470 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821481 4727 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821491 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821500 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.869143 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.890906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.924088 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.924269 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.357614 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440040 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440471 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440555 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod 
\"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440690 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440772 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440871 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440911 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440944 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.441271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.441609 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.443202 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.444325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.450632 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.451083 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.453584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.456781 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv" (OuterVolumeSpecName: "kube-api-access-r8mrv") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "kube-api-access-r8mrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.461639 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info" (OuterVolumeSpecName: "pod-info") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.485297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data" (OuterVolumeSpecName: "config-data") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523273 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" exitCode=0 Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523335 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523378 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerDied","Data":"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56"} Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523443 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerDied","Data":"db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75"} Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523468 4727 scope.go:117] "RemoveContainer" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.532820 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf" (OuterVolumeSpecName: "server-conf") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545217 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545247 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545255 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545266 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545278 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545306 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545315 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545328 4727 reconciler_common.go:293] 
"Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545337 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.571969 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.581382 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.592934 4727 scope.go:117] "RemoveContainer" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.601340 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.615132 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.630689 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631273 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="setup-container" Jan 09 11:09:28 crc 
kubenswrapper[4727]: I0109 11:09:28.631300 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="setup-container" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631333 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="setup-container" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631342 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="setup-container" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631368 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631377 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631395 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631403 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631664 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631695 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.633100 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.633607 4727 scope.go:117] "RemoveContainer" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.634220 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56\": container with ID starting with 6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56 not found: ID does not exist" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.634259 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56"} err="failed to get container status \"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56\": rpc error: code = NotFound desc = could not find container \"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56\": container with ID starting with 6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56 not found: ID does not exist" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.634282 4727 scope.go:117] "RemoveContainer" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.634812 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3\": container with ID starting with fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3 not found: ID does not exist" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 
11:09:28.634843 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3"} err="failed to get container status \"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3\": rpc error: code = NotFound desc = could not find container \"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3\": container with ID starting with fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3 not found: ID does not exist" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.637191 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638565 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638925 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638592 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xx2j9" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638724 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638762 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.644326 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.647845 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc 
kubenswrapper[4727]: I0109 11:09:28.647877 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.651266 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749299 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749595 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749695 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwptt\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-kube-api-access-jwptt\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749769 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 
11:09:28.749878 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750009 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750102 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852633 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwptt\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-kube-api-access-jwptt\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852841 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 
11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.854459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.857864 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.858083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.858587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.858747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.860767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.861272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.865307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.873211 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " 
pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.874074 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.882614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwptt\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-kube-api-access-jwptt\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.887214 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" path="/var/lib/kubelet/pods/e7a0dc55-5ff9-4b69-8b54-a124f04e383e/volumes" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.892718 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.906238 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.917129 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.919257 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.924471 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.924715 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925317 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925449 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925582 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j7rc6" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925716 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925931 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.926202 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.933332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.978976 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.081741 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a49793da-9c08-47ea-892e-fe9e5b16d309-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082523 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082573 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a49793da-9c08-47ea-892e-fe9e5b16d309-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082848 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntlh\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-kube-api-access-mntlh\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185697 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: 
I0109 11:09:29.185818 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a49793da-9c08-47ea-892e-fe9e5b16d309-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185916 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185986 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntlh\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-kube-api-access-mntlh\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.186072 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.186141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a49793da-9c08-47ea-892e-fe9e5b16d309-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187164 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187719 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.188271 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.188644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.193836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a49793da-9c08-47ea-892e-fe9e5b16d309-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.194405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.195615 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.196457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a49793da-9c08-47ea-892e-fe9e5b16d309-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.207366 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntlh\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-kube-api-access-mntlh\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.244599 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.292876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.399696 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.401531 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.404407 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.419310 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494431 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494495 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494560 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " 
pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.495158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.558936 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:29 crc kubenswrapper[4727]: W0109 11:09:29.562398 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf1c8d7_2c22_41a5_a1fc_64e9c35bacb9.slice/crio-9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037 WatchSource:0}: Error finding container 9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037: Status 404 returned error can't find the container with id 9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037 Jan 09 11:09:29 crc kubenswrapper[4727]: 
I0109 11:09:29.597450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597690 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597715 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597756 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.598292 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.598891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.598964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.599005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.599818 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.599910 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.618549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.738553 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.868066 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:29 crc kubenswrapper[4727]: W0109 11:09:29.869358 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda49793da_9c08_47ea_892e_fe9e5b16d309.slice/crio-d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12 WatchSource:0}: Error finding container d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12: Status 404 returned error can't find the container with id d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12 Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.252855 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:30 crc kubenswrapper[4727]: W0109 11:09:30.260697 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1b5611d_b2c2_4a4a_897c_7f37995529cd.slice/crio-b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed WatchSource:0}: Error finding container b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed: Status 404 returned error can't find the container with id b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.548275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerStarted","Data":"9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037"} Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.550098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" 
event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerStarted","Data":"b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed"} Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.551370 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerStarted","Data":"d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12"} Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.874073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" path="/var/lib/kubelet/pods/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60/volumes" Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.564897 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerID="2132af742e53d9d08083cc335b4222b41afd7fcbcecb2bbd86a5624917def2a7" exitCode=0 Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.565187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerDied","Data":"2132af742e53d9d08083cc335b4222b41afd7fcbcecb2bbd86a5624917def2a7"} Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.567534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerStarted","Data":"1c2f8ec07af4960828b3ab65dbd5b3a0ee3b340d21805a2f399fb9e5c66ecda7"} Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.569050 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerStarted","Data":"6e03fb8f18f09c152e786359641442571a6301ed4efc4901838ff5afd287285b"} Jan 09 11:09:32 crc kubenswrapper[4727]: I0109 11:09:32.585919 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerStarted","Data":"390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9"} Jan 09 11:09:32 crc kubenswrapper[4727]: I0109 11:09:32.613916 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" podStartSLOduration=3.613886741 podStartE2EDuration="3.613886741s" podCreationTimestamp="2026-01-09 11:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:09:32.609593589 +0000 UTC m=+1418.059498380" watchObservedRunningTime="2026-01-09 11:09:32.613886741 +0000 UTC m=+1418.063791522" Jan 09 11:09:33 crc kubenswrapper[4727]: I0109 11:09:33.289207 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: i/o timeout" Jan 09 11:09:33 crc kubenswrapper[4727]: I0109 11:09:33.602320 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:36 crc kubenswrapper[4727]: I0109 11:09:36.813566 4727 scope.go:117] "RemoveContainer" containerID="4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0" Jan 09 11:09:39 crc kubenswrapper[4727]: I0109 11:09:39.741254 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:39 crc kubenswrapper[4727]: I0109 11:09:39.822962 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:09:39 crc kubenswrapper[4727]: I0109 11:09:39.823457 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" 
podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" containerID="cri-o://5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6" gracePeriod=10 Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.033745 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-j4b5d"] Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.038911 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.054648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-j4b5d"] Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.140713 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.152721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.152798 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153066 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-config\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153150 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cklng\" (UniqueName: \"kubernetes.io/projected/95c81071-440f-4823-8240-dfd215cdf314-kube-api-access-cklng\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153340 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153423 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256273 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-config\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cklng\" (UniqueName: \"kubernetes.io/projected/95c81071-440f-4823-8240-dfd215cdf314-kube-api-access-cklng\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.257311 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.257339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-config\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.257944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.258044 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.258043 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.258297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.279835 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cklng\" (UniqueName: \"kubernetes.io/projected/95c81071-440f-4823-8240-dfd215cdf314-kube-api-access-cklng\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.365190 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.690910 4727 generic.go:334] "Generic (PLEG): container finished" podID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerID="5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6" exitCode=0 Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.690979 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerDied","Data":"5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:40.969420 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-j4b5d"] Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.299126 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413170 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413751 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" 
(UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.414010 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.414577 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.419884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml" (OuterVolumeSpecName: "kube-api-access-g9xml") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "kube-api-access-g9xml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.472221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.472711 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.478168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config" (OuterVolumeSpecName: "config") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.491678 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.496502 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.516947 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.516983 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517000 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517076 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517095 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517109 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.703988 4727 generic.go:334] "Generic (PLEG): container finished" podID="95c81071-440f-4823-8240-dfd215cdf314" containerID="dcab4742464c8a7ea97ad83510fd8fc8fd047c920ce480909a4163ed2605b779" exitCode=0 Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.704104 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" event={"ID":"95c81071-440f-4823-8240-dfd215cdf314","Type":"ContainerDied","Data":"dcab4742464c8a7ea97ad83510fd8fc8fd047c920ce480909a4163ed2605b779"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.704177 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" event={"ID":"95c81071-440f-4823-8240-dfd215cdf314","Type":"ContainerStarted","Data":"4f82a93e345e376468c95125706af6ab6b5438b8bb1a6593cdada3863380e9f4"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.708009 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerDied","Data":"4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.708104 4727 scope.go:117] "RemoveContainer" containerID="5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.708041 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.740564 4727 scope.go:117] "RemoveContainer" containerID="9c4c8b98157f83d68ea66f336ad75ea1176dca583b8fa920a9e02cc7a8302972" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.760569 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.775460 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.720375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" event={"ID":"95c81071-440f-4823-8240-dfd215cdf314","Type":"ContainerStarted","Data":"9feb824d2893efaac9f51a8f33da94a335567db8ced1be6de7ddf9ca1287c63b"} Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.721522 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.751485 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" podStartSLOduration=3.751465365 podStartE2EDuration="3.751465365s" podCreationTimestamp="2026-01-09 11:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:09:42.742354563 +0000 UTC m=+1428.192259374" watchObservedRunningTime="2026-01-09 11:09:42.751465365 +0000 UTC m=+1428.201370176" Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.873101 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" path="/var/lib/kubelet/pods/0aa41a67-4a03-4479-8296-e3e0b3242cc6/volumes" Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.367594 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.460170 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.460661 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" containerID="cri-o://390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9" gracePeriod=10 Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.822573 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerID="390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9" exitCode=0 Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.822623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerDied","Data":"390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9"} Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.036683 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170256 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170491 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170804 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170833 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.185302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9" (OuterVolumeSpecName: "kube-api-access-dgcx9") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "kube-api-access-dgcx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.224286 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.226988 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.237409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.238274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config" (OuterVolumeSpecName: "config") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.240797 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.248989 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274556 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274601 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274614 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274626 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274637 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274649 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274662 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.838268 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerDied","Data":"b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed"} Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.838355 4727 scope.go:117] "RemoveContainer" containerID="390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.838378 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.874862 4727 scope.go:117] "RemoveContainer" containerID="2132af742e53d9d08083cc335b4222b41afd7fcbcecb2bbd86a5624917def2a7" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.885229 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.901118 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:52 crc kubenswrapper[4727]: I0109 11:09:52.884811 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" path="/var/lib/kubelet/pods/c1b5611d-b2c2-4a4a-897c-7f37995529cd/volumes" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.566102 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv"] Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 11:10:03.567880 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.567907 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 
11:10:03.567939 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.567951 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 11:10:03.567982 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.567995 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 11:10:03.568035 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.568047 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.568384 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.568405 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.569556 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.574947 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.575768 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.576302 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.576958 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.581273 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv"] Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.689627 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.689872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: 
I0109 11:10:03.689917 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.690310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792475 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.803808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.805437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.805453 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.821250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.901862 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.993990 4727 generic.go:334] "Generic (PLEG): container finished" podID="a49793da-9c08-47ea-892e-fe9e5b16d309" containerID="1c2f8ec07af4960828b3ab65dbd5b3a0ee3b340d21805a2f399fb9e5c66ecda7" exitCode=0 Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.994084 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerDied","Data":"1c2f8ec07af4960828b3ab65dbd5b3a0ee3b340d21805a2f399fb9e5c66ecda7"} Jan 09 11:10:04 crc kubenswrapper[4727]: I0109 11:10:04.005695 4727 generic.go:334] "Generic (PLEG): container finished" podID="bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9" containerID="6e03fb8f18f09c152e786359641442571a6301ed4efc4901838ff5afd287285b" exitCode=0 Jan 09 11:10:04 crc kubenswrapper[4727]: I0109 11:10:04.005742 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerDied","Data":"6e03fb8f18f09c152e786359641442571a6301ed4efc4901838ff5afd287285b"} Jan 09 11:10:04 crc kubenswrapper[4727]: I0109 11:10:04.514624 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv"] Jan 09 11:10:04 crc kubenswrapper[4727]: W0109 11:10:04.527137 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9bcc7e6_29a0_4902_a4be_2ea8e0a1f1a1.slice/crio-6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7 WatchSource:0}: Error finding container 6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7: Status 404 returned error can't find the container with id 6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7 Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.018801 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerStarted","Data":"606cf56153fe3380f0b1856793e7fdcc53f5f1215d67f935b5ae6b7ee10f0076"} Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.019753 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.021977 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerStarted","Data":"6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7"} Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.024175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerStarted","Data":"00dae8f5c467ff57ac54c923d0e2b2416daf17f9c3978b1a4385201660a138b9"} Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.035952 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.074177 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.074156299 podStartE2EDuration="37.074156299s" podCreationTimestamp="2026-01-09 11:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:10:05.067869699 +0000 UTC m=+1450.517774490" watchObservedRunningTime="2026-01-09 11:10:05.074156299 +0000 UTC m=+1450.524061080" Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.098688 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.098667275 podStartE2EDuration="37.098667275s" podCreationTimestamp="2026-01-09 11:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:10:05.095031741 +0000 UTC m=+1450.544936532" watchObservedRunningTime="2026-01-09 11:10:05.098667275 +0000 UTC m=+1450.548572056" Jan 09 11:10:16 crc kubenswrapper[4727]: I0109 11:10:16.184328 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerStarted","Data":"1f928fbedfb7bc8b275b06147ed533d3c4294ae75fede066f6997462f74c7c3d"} Jan 09 11:10:16 crc kubenswrapper[4727]: I0109 11:10:16.208249 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" podStartSLOduration=1.975981945 podStartE2EDuration="13.208225745s" podCreationTimestamp="2026-01-09 11:10:03 +0000 UTC" firstStartedPulling="2026-01-09 11:10:04.530118395 +0000 UTC m=+1449.980023166" lastFinishedPulling="2026-01-09 11:10:15.762362195 +0000 UTC m=+1461.212266966" observedRunningTime="2026-01-09 11:10:16.203154605 +0000 UTC m=+1461.653059406" 
watchObservedRunningTime="2026-01-09 11:10:16.208225745 +0000 UTC m=+1461.658130536" Jan 09 11:10:18 crc kubenswrapper[4727]: I0109 11:10:18.983767 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 11:10:19 crc kubenswrapper[4727]: I0109 11:10:19.297870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:10:28 crc kubenswrapper[4727]: I0109 11:10:28.309191 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerID="1f928fbedfb7bc8b275b06147ed533d3c4294ae75fede066f6997462f74c7c3d" exitCode=0 Jan 09 11:10:28 crc kubenswrapper[4727]: I0109 11:10:28.309314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerDied","Data":"1f928fbedfb7bc8b275b06147ed533d3c4294ae75fede066f6997462f74c7c3d"} Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.761454 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.827493 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.827979 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.828154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.828202 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.834952 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v" (OuterVolumeSpecName: "kube-api-access-kxk9v") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "kube-api-access-kxk9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.835446 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.860568 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.861846 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory" (OuterVolumeSpecName: "inventory") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931080 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931125 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931137 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931148 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.332527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerDied","Data":"6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7"} Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.332582 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.332644 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.430379 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm"] Jan 09 11:10:30 crc kubenswrapper[4727]: E0109 11:10:30.431038 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.431287 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.431608 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.432962 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.436321 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.436475 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.437719 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.437924 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.441040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.441114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.441154 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.444683 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm"] Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.543565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.543821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.543887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.548976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.553033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.563156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.770139 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:31 crc kubenswrapper[4727]: I0109 11:10:31.333359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm"] Jan 09 11:10:32 crc kubenswrapper[4727]: I0109 11:10:32.359984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerStarted","Data":"83fb2bc948a64679249e916f95d25d9ad6f941205a63e1952138f0b5c8da938a"} Jan 09 11:10:32 crc kubenswrapper[4727]: I0109 11:10:32.360560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerStarted","Data":"d99a589f1d2bfa22b2f784d0d7a073457bbd7ffa76fa9e62ec52a87a536d9911"} Jan 09 11:10:32 crc kubenswrapper[4727]: I0109 11:10:32.381314 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" podStartSLOduration=1.845153318 podStartE2EDuration="2.381287691s" podCreationTimestamp="2026-01-09 11:10:30 +0000 UTC" firstStartedPulling="2026-01-09 11:10:31.357086072 +0000 UTC m=+1476.806990853" lastFinishedPulling="2026-01-09 11:10:31.893220395 +0000 UTC m=+1477.343125226" observedRunningTime="2026-01-09 11:10:32.378293424 +0000 UTC m=+1477.828198225" watchObservedRunningTime="2026-01-09 11:10:32.381287691 +0000 UTC m=+1477.831192492" Jan 09 11:10:35 crc kubenswrapper[4727]: I0109 11:10:35.396146 4727 generic.go:334] "Generic (PLEG): container finished" podID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerID="83fb2bc948a64679249e916f95d25d9ad6f941205a63e1952138f0b5c8da938a" exitCode=0 Jan 09 11:10:35 crc kubenswrapper[4727]: I0109 11:10:35.396288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerDied","Data":"83fb2bc948a64679249e916f95d25d9ad6f941205a63e1952138f0b5c8da938a"} Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.822453 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.908758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.908877 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.908926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.916937 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt" (OuterVolumeSpecName: "kube-api-access-m9smt") pod "ce764242-0f23-4580-87ee-9f0f2f81fb0e" (UID: "ce764242-0f23-4580-87ee-9f0f2f81fb0e"). InnerVolumeSpecName "kube-api-access-m9smt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.940404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce764242-0f23-4580-87ee-9f0f2f81fb0e" (UID: "ce764242-0f23-4580-87ee-9f0f2f81fb0e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.940808 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory" (OuterVolumeSpecName: "inventory") pod "ce764242-0f23-4580-87ee-9f0f2f81fb0e" (UID: "ce764242-0f23-4580-87ee-9f0f2f81fb0e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.975962 4727 scope.go:117] "RemoveContainer" containerID="5456968a5bb394405d1937902e90ca9c687f3ec8600257fc65b14f86f0be1050" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.012009 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.012048 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.012063 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:37 crc kubenswrapper[4727]: 
I0109 11:10:37.028686 4727 scope.go:117] "RemoveContainer" containerID="508aae6e73476bd7d8554f7bf79128adfc2937e36453761ce5d6c273144e8c65" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.085431 4727 scope.go:117] "RemoveContainer" containerID="aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.111670 4727 scope.go:117] "RemoveContainer" containerID="9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.137899 4727 scope.go:117] "RemoveContainer" containerID="978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.425093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerDied","Data":"d99a589f1d2bfa22b2f784d0d7a073457bbd7ffa76fa9e62ec52a87a536d9911"} Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.425708 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99a589f1d2bfa22b2f784d0d7a073457bbd7ffa76fa9e62ec52a87a536d9911" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.425176 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.496744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc"] Jan 09 11:10:37 crc kubenswrapper[4727]: E0109 11:10:37.497147 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.497167 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.497388 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.498083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.500745 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.500808 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.501171 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.503927 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.516655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc"] Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.625379 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.625757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.626752 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.626905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.729846 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.730201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.730343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.730495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.736181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.736181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.737285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.761050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.873251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:38 crc kubenswrapper[4727]: I0109 11:10:38.442979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc"] Jan 09 11:10:39 crc kubenswrapper[4727]: I0109 11:10:39.446872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerStarted","Data":"422ebdc6dd6112f3e20a548d3f702db80a12d85c42b72dbbf30001fd9874275e"} Jan 09 11:10:39 crc kubenswrapper[4727]: I0109 11:10:39.447699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerStarted","Data":"e186a8e419b648f807121156f384a6dd0b31f821e18f771ed7229a01613aa47f"} Jan 09 11:10:39 crc kubenswrapper[4727]: I0109 11:10:39.471643 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" podStartSLOduration=2.017110885 podStartE2EDuration="2.471616745s" podCreationTimestamp="2026-01-09 11:10:37 +0000 UTC" firstStartedPulling="2026-01-09 11:10:38.451580442 +0000 UTC m=+1483.901485233" 
lastFinishedPulling="2026-01-09 11:10:38.906086302 +0000 UTC m=+1484.355991093" observedRunningTime="2026-01-09 11:10:39.462723917 +0000 UTC m=+1484.912628708" watchObservedRunningTime="2026-01-09 11:10:39.471616745 +0000 UTC m=+1484.921521526" Jan 09 11:11:09 crc kubenswrapper[4727]: I0109 11:11:09.405737 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:11:09 crc kubenswrapper[4727]: I0109 11:11:09.406729 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:11:37 crc kubenswrapper[4727]: I0109 11:11:37.318264 4727 scope.go:117] "RemoveContainer" containerID="afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd" Jan 09 11:11:39 crc kubenswrapper[4727]: I0109 11:11:39.405385 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:11:39 crc kubenswrapper[4727]: I0109 11:11:39.405968 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.268235 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.271693 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.278011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.315549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.333956 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.335034 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.437247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod 
\"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.437385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.437461 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.438229 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.438503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.463033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod \"certified-operators-mnlpw\" (UID: 
\"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.631608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:47 crc kubenswrapper[4727]: I0109 11:11:47.208427 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:48 crc kubenswrapper[4727]: I0109 11:11:48.206456 4727 generic.go:334] "Generic (PLEG): container finished" podID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" exitCode=0 Jan 09 11:11:48 crc kubenswrapper[4727]: I0109 11:11:48.206570 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b"} Jan 09 11:11:48 crc kubenswrapper[4727]: I0109 11:11:48.207739 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerStarted","Data":"c6761630a27b118fa7a1b8ffc3af0856cbb08e875418e32660c39fd050836633"} Jan 09 11:11:49 crc kubenswrapper[4727]: I0109 11:11:49.220272 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerStarted","Data":"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d"} Jan 09 11:11:50 crc kubenswrapper[4727]: I0109 11:11:50.235412 4727 generic.go:334] "Generic (PLEG): container finished" podID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" exitCode=0 Jan 09 11:11:50 crc kubenswrapper[4727]: I0109 
11:11:50.235553 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d"} Jan 09 11:11:51 crc kubenswrapper[4727]: I0109 11:11:51.248690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerStarted","Data":"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2"} Jan 09 11:11:51 crc kubenswrapper[4727]: I0109 11:11:51.272837 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mnlpw" podStartSLOduration=2.810798393 podStartE2EDuration="5.272794295s" podCreationTimestamp="2026-01-09 11:11:46 +0000 UTC" firstStartedPulling="2026-01-09 11:11:48.209265423 +0000 UTC m=+1553.659170204" lastFinishedPulling="2026-01-09 11:11:50.671261325 +0000 UTC m=+1556.121166106" observedRunningTime="2026-01-09 11:11:51.269689796 +0000 UTC m=+1556.719594587" watchObservedRunningTime="2026-01-09 11:11:51.272794295 +0000 UTC m=+1556.722699076" Jan 09 11:11:56 crc kubenswrapper[4727]: I0109 11:11:56.632195 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:56 crc kubenswrapper[4727]: I0109 11:11:56.633158 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:56 crc kubenswrapper[4727]: I0109 11:11:56.684866 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:57 crc kubenswrapper[4727]: I0109 11:11:57.367599 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mnlpw" Jan 
09 11:11:57 crc kubenswrapper[4727]: I0109 11:11:57.421687 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.335830 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mnlpw" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" containerID="cri-o://5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" gracePeriod=2 Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.839209 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.970053 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.970679 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.970900 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities" (OuterVolumeSpecName: "utilities") pod "1235df16-02a9-4ac7-b8e2-d3411d65c5cd" (UID: "1235df16-02a9-4ac7-b8e2-d3411d65c5cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.971050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.972204 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.977178 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg" (OuterVolumeSpecName: "kube-api-access-pf6tg") pod "1235df16-02a9-4ac7-b8e2-d3411d65c5cd" (UID: "1235df16-02a9-4ac7-b8e2-d3411d65c5cd"). InnerVolumeSpecName "kube-api-access-pf6tg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.074724 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351473 4727 generic.go:334] "Generic (PLEG): container finished" podID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" exitCode=0 Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351542 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2"} Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"c6761630a27b118fa7a1b8ffc3af0856cbb08e875418e32660c39fd050836633"} Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351630 4727 scope.go:117] "RemoveContainer" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351563 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.375525 4727 scope.go:117] "RemoveContainer" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.397368 4727 scope.go:117] "RemoveContainer" containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.423464 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1235df16-02a9-4ac7-b8e2-d3411d65c5cd" (UID: "1235df16-02a9-4ac7-b8e2-d3411d65c5cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.440925 4727 scope.go:117] "RemoveContainer" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" Jan 09 11:12:00 crc kubenswrapper[4727]: E0109 11:12:00.441404 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2\": container with ID starting with 5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2 not found: ID does not exist" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441440 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2"} err="failed to get container status \"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2\": rpc error: code = NotFound desc = could not find container \"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2\": 
container with ID starting with 5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2 not found: ID does not exist" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441463 4727 scope.go:117] "RemoveContainer" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" Jan 09 11:12:00 crc kubenswrapper[4727]: E0109 11:12:00.441962 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d\": container with ID starting with a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d not found: ID does not exist" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441986 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d"} err="failed to get container status \"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d\": rpc error: code = NotFound desc = could not find container \"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d\": container with ID starting with a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d not found: ID does not exist" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441998 4727 scope.go:117] "RemoveContainer" containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" Jan 09 11:12:00 crc kubenswrapper[4727]: E0109 11:12:00.442433 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b\": container with ID starting with 7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b not found: ID does not exist" 
containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.442460 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b"} err="failed to get container status \"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b\": rpc error: code = NotFound desc = could not find container \"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b\": container with ID starting with 7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b not found: ID does not exist" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.484117 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.699108 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.708725 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.875988 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" path="/var/lib/kubelet/pods/1235df16-02a9-4ac7-b8e2-d3411d65c5cd/volumes" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.755658 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:06 crc kubenswrapper[4727]: E0109 11:12:06.757356 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-utilities" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757379 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-utilities" Jan 09 11:12:06 crc kubenswrapper[4727]: E0109 11:12:06.757400 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757429 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" Jan 09 11:12:06 crc kubenswrapper[4727]: E0109 11:12:06.757459 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-content" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757465 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-content" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757920 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.759880 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.767798 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.826691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-catalog-content\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.826927 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swsrp\" (UniqueName: \"kubernetes.io/projected/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-kube-api-access-swsrp\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.827410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-utilities\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-catalog-content\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929445 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-swsrp\" (UniqueName: \"kubernetes.io/projected/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-kube-api-access-swsrp\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929572 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-utilities\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-catalog-content\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929930 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-utilities\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.953836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swsrp\" (UniqueName: \"kubernetes.io/projected/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-kube-api-access-swsrp\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:07 crc kubenswrapper[4727]: I0109 11:12:07.088462 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:07 crc kubenswrapper[4727]: I0109 11:12:07.652167 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:08 crc kubenswrapper[4727]: I0109 11:12:08.430571 4727 generic.go:334] "Generic (PLEG): container finished" podID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" containerID="2caca0541fe47929e16217e797d21ae7809a50fd1a6f0f5f9a4e867fd53bbaad" exitCode=0 Jan 09 11:12:08 crc kubenswrapper[4727]: I0109 11:12:08.430677 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerDied","Data":"2caca0541fe47929e16217e797d21ae7809a50fd1a6f0f5f9a4e867fd53bbaad"} Jan 09 11:12:08 crc kubenswrapper[4727]: I0109 11:12:08.430884 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerStarted","Data":"4a38c7f026728d8816fe27304b7755fb62283693bf4673f19989f176ce1efc58"} Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.404499 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.404572 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.404613 4727 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.405281 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.405327 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" gracePeriod=600 Jan 09 11:12:10 crc kubenswrapper[4727]: E0109 11:12:10.061962 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.473126 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" exitCode=0 Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.473188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019"} Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.473248 4727 scope.go:117] "RemoveContainer" containerID="02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a" Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.474113 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:10 crc kubenswrapper[4727]: E0109 11:12:10.474522 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:21 crc kubenswrapper[4727]: E0109 11:12:21.005066 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 11:12:21 crc kubenswrapper[4727]: E0109 11:12:21.005936 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swsrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fbk2g_openshift-marketplace(5045256f-167a-4bdd-b1dc-3b052bbdfeb6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 11:12:21 crc kubenswrapper[4727]: E0109 11:12:21.007146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fbk2g" podUID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" Jan 09 11:12:21 crc 
kubenswrapper[4727]: E0109 11:12:21.606466 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fbk2g" podUID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" Jan 09 11:12:23 crc kubenswrapper[4727]: I0109 11:12:23.860079 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:23 crc kubenswrapper[4727]: E0109 11:12:23.860874 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:34 crc kubenswrapper[4727]: I0109 11:12:34.885256 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:34 crc kubenswrapper[4727]: E0109 11:12:34.886613 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:35 crc kubenswrapper[4727]: I0109 11:12:35.755625 4727 generic.go:334] "Generic (PLEG): container finished" podID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" containerID="2b73d9986017db6134722c854a83c36d6db3cb027749a3a9499c889eb762b36a" exitCode=0 Jan 09 11:12:35 crc 
kubenswrapper[4727]: I0109 11:12:35.755681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerDied","Data":"2b73d9986017db6134722c854a83c36d6db3cb027749a3a9499c889eb762b36a"} Jan 09 11:12:37 crc kubenswrapper[4727]: I0109 11:12:37.778005 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerStarted","Data":"e16ebeb855e655dfd97d784948801190654cdf0593bf3358f79163637b067f1d"} Jan 09 11:12:37 crc kubenswrapper[4727]: I0109 11:12:37.813693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fbk2g" podStartSLOduration=3.791063741 podStartE2EDuration="31.813655846s" podCreationTimestamp="2026-01-09 11:12:06 +0000 UTC" firstStartedPulling="2026-01-09 11:12:08.433070048 +0000 UTC m=+1573.882974829" lastFinishedPulling="2026-01-09 11:12:36.455662163 +0000 UTC m=+1601.905566934" observedRunningTime="2026-01-09 11:12:37.802256562 +0000 UTC m=+1603.252161343" watchObservedRunningTime="2026-01-09 11:12:37.813655846 +0000 UTC m=+1603.263560627" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.088743 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.089359 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.162410 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.977615 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.063739 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.139252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.139635 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9rsdw" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" containerID="cri-o://1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" gracePeriod=2 Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.619289 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.792246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"9f453764-5e7d-441d-90d0-c96ae96597ef\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.792386 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"9f453764-5e7d-441d-90d0-c96ae96597ef\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.792466 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"9f453764-5e7d-441d-90d0-c96ae96597ef\" (UID: 
\"9f453764-5e7d-441d-90d0-c96ae96597ef\") " Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.793095 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities" (OuterVolumeSpecName: "utilities") pod "9f453764-5e7d-441d-90d0-c96ae96597ef" (UID: "9f453764-5e7d-441d-90d0-c96ae96597ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.800610 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st" (OuterVolumeSpecName: "kube-api-access-z79st") pod "9f453764-5e7d-441d-90d0-c96ae96597ef" (UID: "9f453764-5e7d-441d-90d0-c96ae96597ef"). InnerVolumeSpecName "kube-api-access-z79st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.840715 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f453764-5e7d-441d-90d0-c96ae96597ef" (UID: "9f453764-5e7d-441d-90d0-c96ae96597ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.861977 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:48 crc kubenswrapper[4727]: E0109 11:12:48.862325 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.894864 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.894889 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.894899 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939363 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" exitCode=0 Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" 
event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49"} Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939542 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"357891722b37e84c5d6696b58f957606ce91311ffc64133377aa8cf62644c51c"} Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939569 4727 scope.go:117] "RemoveContainer" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939877 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.965497 4727 scope.go:117] "RemoveContainer" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.975622 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.985817 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.020712 4727 scope.go:117] "RemoveContainer" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.053653 4727 scope.go:117] "RemoveContainer" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" Jan 09 11:12:49 crc kubenswrapper[4727]: E0109 11:12:49.054960 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49\": container 
with ID starting with 1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49 not found: ID does not exist" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055031 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49"} err="failed to get container status \"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49\": rpc error: code = NotFound desc = could not find container \"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49\": container with ID starting with 1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49 not found: ID does not exist" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055079 4727 scope.go:117] "RemoveContainer" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" Jan 09 11:12:49 crc kubenswrapper[4727]: E0109 11:12:49.055848 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f\": container with ID starting with c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f not found: ID does not exist" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055883 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f"} err="failed to get container status \"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f\": rpc error: code = NotFound desc = could not find container \"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f\": container with ID starting with c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f not 
found: ID does not exist" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055903 4727 scope.go:117] "RemoveContainer" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" Jan 09 11:12:49 crc kubenswrapper[4727]: E0109 11:12:49.056286 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc\": container with ID starting with 45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc not found: ID does not exist" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.056331 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc"} err="failed to get container status \"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc\": rpc error: code = NotFound desc = could not find container \"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc\": container with ID starting with 45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc not found: ID does not exist" Jan 09 11:12:50 crc kubenswrapper[4727]: I0109 11:12:50.874368 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" path="/var/lib/kubelet/pods/9f453764-5e7d-441d-90d0-c96ae96597ef/volumes" Jan 09 11:13:02 crc kubenswrapper[4727]: I0109 11:13:02.860188 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:02 crc kubenswrapper[4727]: E0109 11:13:02.860989 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:13 crc kubenswrapper[4727]: I0109 11:13:13.860587 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:13 crc kubenswrapper[4727]: E0109 11:13:13.861585 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:24 crc kubenswrapper[4727]: I0109 11:13:24.871159 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:24 crc kubenswrapper[4727]: E0109 11:13:24.872651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:37 crc kubenswrapper[4727]: I0109 11:13:37.860595 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:37 crc kubenswrapper[4727]: E0109 11:13:37.863253 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:50 crc kubenswrapper[4727]: I0109 11:13:50.861031 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:50 crc kubenswrapper[4727]: E0109 11:13:50.862207 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:01 crc kubenswrapper[4727]: I0109 11:14:01.732938 4727 generic.go:334] "Generic (PLEG): container finished" podID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerID="422ebdc6dd6112f3e20a548d3f702db80a12d85c42b72dbbf30001fd9874275e" exitCode=0 Jan 09 11:14:01 crc kubenswrapper[4727]: I0109 11:14:01.733469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerDied","Data":"422ebdc6dd6112f3e20a548d3f702db80a12d85c42b72dbbf30001fd9874275e"} Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.188602 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.280723 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.280866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.280937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.281177 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.294807 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.294831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm" (OuterVolumeSpecName: "kube-api-access-9bvmm") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "kube-api-access-9bvmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.317275 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.320058 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory" (OuterVolumeSpecName: "inventory") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384569 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384613 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384626 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384637 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.764239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerDied","Data":"e186a8e419b648f807121156f384a6dd0b31f821e18f771ed7229a01613aa47f"} Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.764304 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e186a8e419b648f807121156f384a6dd0b31f821e18f771ed7229a01613aa47f" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.764371 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.899819 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz"] Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900450 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900476 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900491 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-content" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900498 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-content" Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900530 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-utilities" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900540 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-utilities" Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900563 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900569 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900973 
4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.901008 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.902006 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.905181 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.905317 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.905184 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.908645 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.920752 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz"] Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.996808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:03 crc 
kubenswrapper[4727]: I0109 11:14:03.997008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.997036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.048501 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.058370 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.069730 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.080068 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.100293 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.100346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.100457 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.104284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.105440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.123934 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.222620 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.809235 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.816928 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.873406 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:04 crc kubenswrapper[4727]: E0109 11:14:04.874146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.883726 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" path="/var/lib/kubelet/pods/b3fe1de7-6846-464a-8c23-b5cbc944ffaf/volumes" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.885108 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" 
path="/var/lib/kubelet/pods/c54e2e39-4fb7-4ccb-98e4-437653bcc01c/volumes" Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.033734 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.045312 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.055835 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.074320 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.790427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerStarted","Data":"57af9f3728f5b4fee091f76e69c7f54b89f80090673fd53559e2fb8320ba3fe4"} Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.804134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerStarted","Data":"5d45bc6e13ecbeb42bb2358acab10d095b3fbfd498c6a9f5de9d288fc9598d06"} Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.841136 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" podStartSLOduration=2.905406861 podStartE2EDuration="3.841107549s" podCreationTimestamp="2026-01-09 11:14:03 +0000 UTC" firstStartedPulling="2026-01-09 11:14:04.816534152 +0000 UTC m=+1690.266438933" lastFinishedPulling="2026-01-09 11:14:05.75223483 +0000 UTC m=+1691.202139621" observedRunningTime="2026-01-09 11:14:06.825280651 +0000 UTC 
m=+1692.275185452" watchObservedRunningTime="2026-01-09 11:14:06.841107549 +0000 UTC m=+1692.291012350" Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.885445 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" path="/var/lib/kubelet/pods/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9/volumes" Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.886248 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" path="/var/lib/kubelet/pods/b5dba580-00b4-4bed-a734-78ac96b5cd4d/volumes" Jan 09 11:14:10 crc kubenswrapper[4727]: I0109 11:14:10.057416 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:14:10 crc kubenswrapper[4727]: I0109 11:14:10.077163 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:14:10 crc kubenswrapper[4727]: I0109 11:14:10.873944 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" path="/var/lib/kubelet/pods/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b/volumes" Jan 09 11:14:11 crc kubenswrapper[4727]: I0109 11:14:11.041344 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:14:11 crc kubenswrapper[4727]: I0109 11:14:11.051644 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.043551 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.053465 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.872250 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" path="/var/lib/kubelet/pods/14fbdc64-2108-41db-88bd-d978e9ce6550/volumes" Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.873111 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" path="/var/lib/kubelet/pods/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8/volumes" Jan 09 11:14:15 crc kubenswrapper[4727]: I0109 11:14:15.862599 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:15 crc kubenswrapper[4727]: E0109 11:14:15.863481 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.056879 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.067565 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.077642 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.087762 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.096984 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.105177 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 
09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.861186 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:30 crc kubenswrapper[4727]: E0109 11:14:30.861584 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.873956 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" path="/var/lib/kubelet/pods/108eb21f-902c-4942-8be4-9a3b11146c25/volumes" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.874858 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" path="/var/lib/kubelet/pods/46480603-3f1d-4589-ba8e-9026edee07c7/volumes" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.875485 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" path="/var/lib/kubelet/pods/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb/volumes" Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.053571 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.066098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.075802 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 
11:14:35.084203 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.092243 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.100481 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:14:36 crc kubenswrapper[4727]: I0109 11:14:36.872973 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" path="/var/lib/kubelet/pods/22d06cd8-5172-4755-93f0-6c6aa036bed8/volumes" Jan 09 11:14:36 crc kubenswrapper[4727]: I0109 11:14:36.874073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" path="/var/lib/kubelet/pods/4ad382ed-924d-4c03-88b2-63d89690a56a/volumes" Jan 09 11:14:36 crc kubenswrapper[4727]: I0109 11:14:36.874708 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" path="/var/lib/kubelet/pods/c1b70879-a5de-4ea1-9db1-82d9f0416a71/volumes" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.489949 4727 scope.go:117] "RemoveContainer" containerID="576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.515633 4727 scope.go:117] "RemoveContainer" containerID="72f21ea3746f823a01ff3632cf334c040301673bdb3b5a878b6260e8b9af266c" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.550917 4727 scope.go:117] "RemoveContainer" containerID="ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.590420 4727 scope.go:117] "RemoveContainer" containerID="e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 
11:14:37.615297 4727 scope.go:117] "RemoveContainer" containerID="5afe7ea6f705be5c16f92e80a56b8b0f094dbbcf85b0af4db628a7dbbeab8019" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.639198 4727 scope.go:117] "RemoveContainer" containerID="fd86d26604fa990daf0250e4ca92d0297bfeb8649e742dfecf596e5d32e6713b" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.686621 4727 scope.go:117] "RemoveContainer" containerID="d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.732545 4727 scope.go:117] "RemoveContainer" containerID="dfac37bf01ecc72f7cbe4e36980b1d63912e58d44854fd22b7eb51acb67a3482" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.792348 4727 scope.go:117] "RemoveContainer" containerID="4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.837975 4727 scope.go:117] "RemoveContainer" containerID="a4b50d5c7e5a2ac088b99192a0ef8ae1f0162a1bb12adc59cf61c748194423e5" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.872470 4727 scope.go:117] "RemoveContainer" containerID="d6959b7da986b00bc70e51fdf39956f346afe58b899a2e451f5f896031407d83" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.895750 4727 scope.go:117] "RemoveContainer" containerID="8cbbc5a0e078338f400d60c2f06eefdbda48f9727dc50c6209388201bc809674" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.923399 4727 scope.go:117] "RemoveContainer" containerID="958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.950428 4727 scope.go:117] "RemoveContainer" containerID="9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.979881 4727 scope.go:117] "RemoveContainer" containerID="538236df2e722658ac6062177b9a40be31fb73d68537a811c36bed8ec6ebd0f2" Jan 09 11:14:38 crc kubenswrapper[4727]: I0109 11:14:38.007265 4727 
scope.go:117] "RemoveContainer" containerID="29e8e8db2a35769af205e4fe07dfcb0f161be2135de38c69be53aa1504c48cb3" Jan 09 11:14:38 crc kubenswrapper[4727]: I0109 11:14:38.034922 4727 scope.go:117] "RemoveContainer" containerID="00e330dc8e4d5563bc7056af16edc5bfdbab81ae265d410bf050c38028359c89" Jan 09 11:14:38 crc kubenswrapper[4727]: I0109 11:14:38.058087 4727 scope.go:117] "RemoveContainer" containerID="1263ecb7bda875303dddab37976768c97598ef07433b73e25914d8e050a30df9" Jan 09 11:14:43 crc kubenswrapper[4727]: I0109 11:14:43.035778 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:14:43 crc kubenswrapper[4727]: I0109 11:14:43.051430 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:14:43 crc kubenswrapper[4727]: I0109 11:14:43.861001 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:43 crc kubenswrapper[4727]: E0109 11:14:43.861319 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:44 crc kubenswrapper[4727]: I0109 11:14:44.870805 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64657563-7e2f-46ef-a906-37e42398662a" path="/var/lib/kubelet/pods/64657563-7e2f-46ef-a906-37e42398662a/volumes" Jan 09 11:14:55 crc kubenswrapper[4727]: I0109 11:14:55.860329 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:55 crc kubenswrapper[4727]: E0109 11:14:55.863107 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:57 crc kubenswrapper[4727]: I0109 11:14:57.042401 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:14:57 crc kubenswrapper[4727]: I0109 11:14:57.054922 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:14:58 crc kubenswrapper[4727]: I0109 11:14:58.873114 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5667805-aff5-4227-88df-2d2440259e9b" path="/var/lib/kubelet/pods/e5667805-aff5-4227-88df-2d2440259e9b/volumes" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.154702 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.156877 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.159757 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.161764 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.191016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.249012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.249071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.249174 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.351434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.351586 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.351613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.353090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.361876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.376883 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.491624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.984086 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 11:15:01 crc kubenswrapper[4727]: I0109 11:15:01.476016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerStarted","Data":"84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709"} Jan 09 11:15:01 crc kubenswrapper[4727]: I0109 11:15:01.476085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerStarted","Data":"a36b7da4874459996f33c478062bdddcae1fa2f17cd5ed34a370f5e59ba860df"} Jan 09 11:15:01 crc kubenswrapper[4727]: I0109 11:15:01.510556 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" 
podStartSLOduration=1.510529346 podStartE2EDuration="1.510529346s" podCreationTimestamp="2026-01-09 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:15:01.506315265 +0000 UTC m=+1746.956220036" watchObservedRunningTime="2026-01-09 11:15:01.510529346 +0000 UTC m=+1746.960434127" Jan 09 11:15:02 crc kubenswrapper[4727]: I0109 11:15:02.487886 4727 generic.go:334] "Generic (PLEG): container finished" podID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerID="84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709" exitCode=0 Jan 09 11:15:02 crc kubenswrapper[4727]: I0109 11:15:02.487976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerDied","Data":"84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709"} Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.854072 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.947666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.947787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.947923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.949201 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "12b68a71-edf6-4fe6-8f5c-92b1424309c6" (UID: "12b68a71-edf6-4fe6-8f5c-92b1424309c6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.949678 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.955371 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12b68a71-edf6-4fe6-8f5c-92b1424309c6" (UID: "12b68a71-edf6-4fe6-8f5c-92b1424309c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.957004 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k" (OuterVolumeSpecName: "kube-api-access-djt7k") pod "12b68a71-edf6-4fe6-8f5c-92b1424309c6" (UID: "12b68a71-edf6-4fe6-8f5c-92b1424309c6"). InnerVolumeSpecName "kube-api-access-djt7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.052418 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.052477 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.513723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerDied","Data":"a36b7da4874459996f33c478062bdddcae1fa2f17cd5ed34a370f5e59ba860df"} Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.514159 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a36b7da4874459996f33c478062bdddcae1fa2f17cd5ed34a370f5e59ba860df" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.513859 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:07 crc kubenswrapper[4727]: I0109 11:15:07.860380 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:07 crc kubenswrapper[4727]: E0109 11:15:07.861268 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:22 crc kubenswrapper[4727]: I0109 11:15:22.861634 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:22 crc kubenswrapper[4727]: E0109 11:15:22.862848 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:35 crc kubenswrapper[4727]: I0109 11:15:35.045098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:15:35 crc kubenswrapper[4727]: I0109 11:15:35.054108 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:15:35 crc kubenswrapper[4727]: I0109 11:15:35.860295 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:35 crc kubenswrapper[4727]: E0109 11:15:35.861177 
4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:36 crc kubenswrapper[4727]: I0109 11:15:36.873413 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" path="/var/lib/kubelet/pods/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1/volumes" Jan 09 11:15:38 crc kubenswrapper[4727]: I0109 11:15:38.378669 4727 scope.go:117] "RemoveContainer" containerID="6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8" Jan 09 11:15:38 crc kubenswrapper[4727]: I0109 11:15:38.420028 4727 scope.go:117] "RemoveContainer" containerID="9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef" Jan 09 11:15:38 crc kubenswrapper[4727]: I0109 11:15:38.500596 4727 scope.go:117] "RemoveContainer" containerID="61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e" Jan 09 11:15:42 crc kubenswrapper[4727]: I0109 11:15:42.908736 4727 generic.go:334] "Generic (PLEG): container finished" podID="79cfc519-9725-4957-b42c-d262651895a3" containerID="5d45bc6e13ecbeb42bb2358acab10d095b3fbfd498c6a9f5de9d288fc9598d06" exitCode=0 Jan 09 11:15:42 crc kubenswrapper[4727]: I0109 11:15:42.908843 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerDied","Data":"5d45bc6e13ecbeb42bb2358acab10d095b3fbfd498c6a9f5de9d288fc9598d06"} Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.428777 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.627414 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"79cfc519-9725-4957-b42c-d262651895a3\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.627467 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"79cfc519-9725-4957-b42c-d262651895a3\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.627610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"79cfc519-9725-4957-b42c-d262651895a3\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.637121 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg" (OuterVolumeSpecName: "kube-api-access-l9qkg") pod "79cfc519-9725-4957-b42c-d262651895a3" (UID: "79cfc519-9725-4957-b42c-d262651895a3"). InnerVolumeSpecName "kube-api-access-l9qkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.667297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory" (OuterVolumeSpecName: "inventory") pod "79cfc519-9725-4957-b42c-d262651895a3" (UID: "79cfc519-9725-4957-b42c-d262651895a3"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.670215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79cfc519-9725-4957-b42c-d262651895a3" (UID: "79cfc519-9725-4957-b42c-d262651895a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.731503 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.731992 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.732007 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.944028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerDied","Data":"57af9f3728f5b4fee091f76e69c7f54b89f80090673fd53559e2fb8320ba3fe4"} Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.944090 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57af9f3728f5b4fee091f76e69c7f54b89f80090673fd53559e2fb8320ba3fe4" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 
11:15:44.944175 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.028976 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn"] Jan 09 11:15:45 crc kubenswrapper[4727]: E0109 11:15:45.029671 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerName="collect-profiles" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029693 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerName="collect-profiles" Jan 09 11:15:45 crc kubenswrapper[4727]: E0109 11:15:45.029708 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cfc519-9725-4957-b42c-d262651895a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029735 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cfc519-9725-4957-b42c-d262651895a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029973 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cfc519-9725-4957-b42c-d262651895a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029994 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerName="collect-profiles" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.030942 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.033821 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.034133 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.034140 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.034832 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.040689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.040756 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.040891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.041440 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn"] Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.142347 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.142428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.142462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.148118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.164947 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.165545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.359126 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.946159 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn"] Jan 09 11:15:46 crc kubenswrapper[4727]: I0109 11:15:46.975336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerStarted","Data":"91d16d30258f1cc31f93c452febe85edd90e3ff593872257f558252b50c50686"} Jan 09 11:15:46 crc kubenswrapper[4727]: I0109 11:15:46.976846 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerStarted","Data":"a9e4aceb8fa35aad5c632fb89183c182f77eac6d44e4296f26ed7e363decc7c6"} Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.044326 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" podStartSLOduration=2.336935045 podStartE2EDuration="3.044302801s" podCreationTimestamp="2026-01-09 11:15:45 +0000 UTC" firstStartedPulling="2026-01-09 11:15:45.964658732 +0000 UTC m=+1791.414563513" lastFinishedPulling="2026-01-09 11:15:46.672026498 +0000 UTC m=+1792.121931269" observedRunningTime="2026-01-09 11:15:47.002169546 +0000 UTC m=+1792.452074327" watchObservedRunningTime="2026-01-09 11:15:48.044302801 +0000 UTC m=+1793.494207592" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.053536 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.064526 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:15:48 crc 
kubenswrapper[4727]: I0109 11:15:48.076179 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.087219 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.096363 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.103804 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.860663 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:48 crc kubenswrapper[4727]: E0109 11:15:48.860994 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.875025 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" path="/var/lib/kubelet/pods/695f5777-ca94-4fee-9620-b22eb2a2d9ab/volumes" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.876162 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790d27d6-9817-413b-b711-f0be91104704" path="/var/lib/kubelet/pods/790d27d6-9817-413b-b711-f0be91104704/volumes" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.876820 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" 
path="/var/lib/kubelet/pods/a52e2c52-54f3-4f0d-9244-1ce7563deb78/volumes" Jan 09 11:16:02 crc kubenswrapper[4727]: I0109 11:16:02.037845 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:16:02 crc kubenswrapper[4727]: I0109 11:16:02.049557 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:16:02 crc kubenswrapper[4727]: I0109 11:16:02.873453 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" path="/var/lib/kubelet/pods/5f7de868-87b0-49c7-ad5e-7c528f181550/volumes" Jan 09 11:16:03 crc kubenswrapper[4727]: I0109 11:16:03.861275 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:03 crc kubenswrapper[4727]: E0109 11:16:03.862848 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:18 crc kubenswrapper[4727]: I0109 11:16:18.860219 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:18 crc kubenswrapper[4727]: E0109 11:16:18.861627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:33 crc 
kubenswrapper[4727]: I0109 11:16:33.859761 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:33 crc kubenswrapper[4727]: E0109 11:16:33.861206 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.618363 4727 scope.go:117] "RemoveContainer" containerID="8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.669409 4727 scope.go:117] "RemoveContainer" containerID="84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.728022 4727 scope.go:117] "RemoveContainer" containerID="3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.777233 4727 scope.go:117] "RemoveContainer" containerID="8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617" Jan 09 11:16:46 crc kubenswrapper[4727]: I0109 11:16:46.861815 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:46 crc kubenswrapper[4727]: E0109 11:16:46.862956 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.052760 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.060118 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.068983 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.076895 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.084688 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.093331 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.877198 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" path="/var/lib/kubelet/pods/37d27352-2f68-4ced-a541-7bbd8bf33fb1/volumes" Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.878656 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" path="/var/lib/kubelet/pods/b7c40808-e98b-4a31-b057-5c5b38ed5774/volumes" Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.881340 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" path="/var/lib/kubelet/pods/bf2c02d0-08f3-4174-a1a1-44b6b99df774/volumes" Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.034918 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:16:57 crc 
kubenswrapper[4727]: I0109 11:16:57.044158 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.053432 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.066640 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.076442 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.087271 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.483860 4727 generic.go:334] "Generic (PLEG): container finished" podID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerID="91d16d30258f1cc31f93c452febe85edd90e3ff593872257f558252b50c50686" exitCode=0 Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.483950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerDied","Data":"91d16d30258f1cc31f93c452febe85edd90e3ff593872257f558252b50c50686"} Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.859974 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:57 crc kubenswrapper[4727]: E0109 11:16:57.860361 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.877903 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" path="/var/lib/kubelet/pods/21e56a97-f683-4290-b69b-ab92efd58b4c/volumes" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.879150 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784df696-fe59-4d64-841e-53fa77ded98f" path="/var/lib/kubelet/pods/784df696-fe59-4d64-841e-53fa77ded98f/volumes" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.879881 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a403535a-35d2-487c-9fab-20360257ec11" path="/var/lib/kubelet/pods/a403535a-35d2-487c-9fab-20360257ec11/volumes" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.962926 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.980676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"f1169cca-13ce-4a18-8901-faa73fc5b913\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.980935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"f1169cca-13ce-4a18-8901-faa73fc5b913\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.980999 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"f1169cca-13ce-4a18-8901-faa73fc5b913\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.998938 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b" (OuterVolumeSpecName: "kube-api-access-lkp2b") pod "f1169cca-13ce-4a18-8901-faa73fc5b913" (UID: "f1169cca-13ce-4a18-8901-faa73fc5b913"). InnerVolumeSpecName "kube-api-access-lkp2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.020755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory" (OuterVolumeSpecName: "inventory") pod "f1169cca-13ce-4a18-8901-faa73fc5b913" (UID: "f1169cca-13ce-4a18-8901-faa73fc5b913"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.020824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1169cca-13ce-4a18-8901-faa73fc5b913" (UID: "f1169cca-13ce-4a18-8901-faa73fc5b913"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.087693 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.087746 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") on node \"crc\" DevicePath \"\"" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.087764 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.507990 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerDied","Data":"a9e4aceb8fa35aad5c632fb89183c182f77eac6d44e4296f26ed7e363decc7c6"} Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.508056 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e4aceb8fa35aad5c632fb89183c182f77eac6d44e4296f26ed7e363decc7c6" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.508107 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.618193 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz"] Jan 09 11:16:59 crc kubenswrapper[4727]: E0109 11:16:59.618942 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.618965 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.619245 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.620301 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.623381 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.623796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.624055 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.625479 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.633619 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz"] Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.707182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.707757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: 
I0109 11:16:59.707986 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.810753 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.810832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.810910 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.819540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.819909 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.829891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.985732 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:17:00 crc kubenswrapper[4727]: I0109 11:17:00.552914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz"] Jan 09 11:17:01 crc kubenswrapper[4727]: I0109 11:17:01.531021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerStarted","Data":"51793d55c847f74bc3bf3ec9d732c6df90a9d058d7d7fce61b22ff4a0274ebfc"} Jan 09 11:17:01 crc kubenswrapper[4727]: I0109 11:17:01.532108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerStarted","Data":"79387a40119342939bcab0cc5d57f57fd1e62ca05cbd7411244c7bf1e5ba9ffc"} Jan 09 11:17:01 crc kubenswrapper[4727]: I0109 11:17:01.563836 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" podStartSLOduration=1.866388975 podStartE2EDuration="2.563806629s" podCreationTimestamp="2026-01-09 11:16:59 +0000 UTC" firstStartedPulling="2026-01-09 11:17:00.55869291 +0000 UTC m=+1866.008597691" lastFinishedPulling="2026-01-09 11:17:01.256110564 +0000 UTC m=+1866.706015345" observedRunningTime="2026-01-09 11:17:01.550531068 +0000 UTC m=+1867.000435859" watchObservedRunningTime="2026-01-09 11:17:01.563806629 +0000 UTC m=+1867.013711480" Jan 09 11:17:06 crc kubenswrapper[4727]: I0109 11:17:06.582227 4727 generic.go:334] "Generic (PLEG): container finished" podID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerID="51793d55c847f74bc3bf3ec9d732c6df90a9d058d7d7fce61b22ff4a0274ebfc" exitCode=0 Jan 09 11:17:06 crc kubenswrapper[4727]: I0109 11:17:06.582324 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerDied","Data":"51793d55c847f74bc3bf3ec9d732c6df90a9d058d7d7fce61b22ff4a0274ebfc"} Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.047114 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.127209 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.127452 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.127627 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.137114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v" (OuterVolumeSpecName: "kube-api-access-47d7v") pod "6811cbf2-94eb-44a0-ae3e-8f0e35163df5" (UID: "6811cbf2-94eb-44a0-ae3e-8f0e35163df5"). InnerVolumeSpecName "kube-api-access-47d7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.160758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory" (OuterVolumeSpecName: "inventory") pod "6811cbf2-94eb-44a0-ae3e-8f0e35163df5" (UID: "6811cbf2-94eb-44a0-ae3e-8f0e35163df5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.163333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6811cbf2-94eb-44a0-ae3e-8f0e35163df5" (UID: "6811cbf2-94eb-44a0-ae3e-8f0e35163df5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.230677 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.230749 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.230765 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.605066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" 
event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerDied","Data":"79387a40119342939bcab0cc5d57f57fd1e62ca05cbd7411244c7bf1e5ba9ffc"} Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.605128 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79387a40119342939bcab0cc5d57f57fd1e62ca05cbd7411244c7bf1e5ba9ffc" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.605166 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.714110 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr"] Jan 09 11:17:08 crc kubenswrapper[4727]: E0109 11:17:08.715169 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.715194 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.715708 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.717004 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.725911 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr"] Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726041 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726224 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726379 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726775 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.745823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.746114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.746203 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.848176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.848279 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.848338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.853742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.855297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.871446 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:09 crc kubenswrapper[4727]: I0109 11:17:09.060598 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:09 crc kubenswrapper[4727]: I0109 11:17:09.683365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr"] Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.641503 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerStarted","Data":"8d43841106431a9a04b8882c51eb37251279334a2faf489f800d4dba1b0a8b62"} Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.642104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerStarted","Data":"62dba88ce732c071ff647fe31a5b2b8808665fa163c36a298b388ac3c44202b9"} Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.670678 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" podStartSLOduration=2.146475433 podStartE2EDuration="2.670648918s" podCreationTimestamp="2026-01-09 11:17:08 +0000 UTC" firstStartedPulling="2026-01-09 11:17:09.687896098 +0000 UTC m=+1875.137800879" lastFinishedPulling="2026-01-09 11:17:10.212069583 +0000 UTC m=+1875.661974364" observedRunningTime="2026-01-09 11:17:10.661948629 +0000 UTC m=+1876.111853410" watchObservedRunningTime="2026-01-09 11:17:10.670648918 +0000 UTC m=+1876.120553699" Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.861056 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:17:11 crc kubenswrapper[4727]: I0109 11:17:11.686853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"} Jan 09 11:17:25 crc kubenswrapper[4727]: I0109 11:17:25.050225 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:17:25 crc kubenswrapper[4727]: I0109 11:17:25.060097 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:17:26 crc kubenswrapper[4727]: I0109 11:17:26.880878 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" path="/var/lib/kubelet/pods/88c213a7-1f1e-4866-aa20-019382b42f61/volumes" Jan 09 11:17:38 crc kubenswrapper[4727]: I0109 11:17:38.891189 4727 scope.go:117] "RemoveContainer" containerID="f947874cac612f305507a7bdaf8471df8d3875799b74261e1f17af4a0dc3c24e" Jan 09 11:17:38 crc kubenswrapper[4727]: I0109 11:17:38.926206 4727 scope.go:117] "RemoveContainer" containerID="478ae5028a10c820659c5824f58f2f2a67e0f6b5335c5e28c9b5c14e796d35bd" Jan 09 11:17:38 crc kubenswrapper[4727]: I0109 11:17:38.981801 4727 scope.go:117] "RemoveContainer" containerID="e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.048641 4727 scope.go:117] "RemoveContainer" containerID="ddf7504037a0d74d61286b57ca98d5ca4686f34d2f909e9a72a2f12480874e58" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.075465 4727 scope.go:117] "RemoveContainer" containerID="e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.126108 4727 scope.go:117] "RemoveContainer" containerID="339bcb56de0d0083e60bb9f99ee6710c9861edb4bb896039162501a9d46ed6ed" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.175015 4727 scope.go:117] "RemoveContainer" containerID="c3ed6956b8e31f8503a62e89b83a4ac7a7d349bbdaa2c48c86045a4720314a5c" Jan 
09 11:17:50 crc kubenswrapper[4727]: I0109 11:17:50.062421 4727 generic.go:334] "Generic (PLEG): container finished" podID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerID="8d43841106431a9a04b8882c51eb37251279334a2faf489f800d4dba1b0a8b62" exitCode=0 Jan 09 11:17:50 crc kubenswrapper[4727]: I0109 11:17:50.062558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerDied","Data":"8d43841106431a9a04b8882c51eb37251279334a2faf489f800d4dba1b0a8b62"} Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.510837 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.576779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.576864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.577037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.584299 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm" (OuterVolumeSpecName: "kube-api-access-r8gkm") pod "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" (UID: "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea"). InnerVolumeSpecName "kube-api-access-r8gkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.610596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory" (OuterVolumeSpecName: "inventory") pod "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" (UID: "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.618879 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" (UID: "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.679118 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.679161 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.679178 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.108624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerDied","Data":"62dba88ce732c071ff647fe31a5b2b8808665fa163c36a298b388ac3c44202b9"} Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.109793 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62dba88ce732c071ff647fe31a5b2b8808665fa163c36a298b388ac3c44202b9" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.109204 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.190096 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s"] Jan 09 11:17:52 crc kubenswrapper[4727]: E0109 11:17:52.190967 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.190984 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.191175 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.191948 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.197734 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.198009 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.198125 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.198194 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.208438 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s"] Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.293409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.293526 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.293710 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.395432 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.395571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.395624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.400632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.401136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.416428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.511415 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.895576 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s"] Jan 09 11:17:53 crc kubenswrapper[4727]: I0109 11:17:53.046098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:17:53 crc kubenswrapper[4727]: I0109 11:17:53.054051 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:17:53 crc kubenswrapper[4727]: I0109 11:17:53.133725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerStarted","Data":"34ab025612c8accc0a3d077ad7711b72fb3a0786386f472ea626ccc61d8251ab"} Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.034272 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.044516 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.143580 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerStarted","Data":"24b8cb7c86256279d4e47319da3d87c1e5d0fd8bd60aa38b7566a705e7d9003f"} Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.167865 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" podStartSLOduration=1.7342295330000002 podStartE2EDuration="2.16783265s" podCreationTimestamp="2026-01-09 11:17:52 +0000 UTC" 
firstStartedPulling="2026-01-09 11:17:52.898279985 +0000 UTC m=+1918.348184766" lastFinishedPulling="2026-01-09 11:17:53.331883102 +0000 UTC m=+1918.781787883" observedRunningTime="2026-01-09 11:17:54.162816588 +0000 UTC m=+1919.612721389" watchObservedRunningTime="2026-01-09 11:17:54.16783265 +0000 UTC m=+1919.617737501" Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.877078 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" path="/var/lib/kubelet/pods/10127ac2-1ffe-4ad6-b483-ff5952f88b4a/volumes" Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.877761 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" path="/var/lib/kubelet/pods/c95f5eef-fff8-427b-9318-ebfcf188f0a9/volumes" Jan 09 11:18:27 crc kubenswrapper[4727]: I0109 11:18:27.996966 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.000779 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.008671 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.094387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.094575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.094694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.197422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.197538 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.197621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.198371 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.198610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.228833 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.333579 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.838490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.383605 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.387351 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.403489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.517585 4727 generic.go:334] "Generic (PLEG): container finished" podID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" exitCode=0 Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.517670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1"} Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.517741 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerStarted","Data":"4296e666b17eebb6a3607981a9d50b3da27e937ddf21b913930259e0af4c499e"} Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.532992 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"redhat-marketplace-kg722\" (UID: 
\"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.533096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.533159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635445 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod 
\"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.636661 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.662459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.707864 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.253858 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.531991 4727 generic.go:334] "Generic (PLEG): container finished" podID="282e6323-e597-4905-a7d7-f885b7eff305" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" exitCode=0 Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.532059 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804"} Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.532095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerStarted","Data":"94dea0e522fa183593e3777105081b07c39058a2c243f7c9790b7aade563bd6c"} Jan 09 11:18:31 crc kubenswrapper[4727]: I0109 11:18:31.548672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerStarted","Data":"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2"} Jan 09 11:18:32 crc kubenswrapper[4727]: I0109 11:18:32.561485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerStarted","Data":"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6"} Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.574676 4727 generic.go:334] "Generic (PLEG): container finished" podID="282e6323-e597-4905-a7d7-f885b7eff305" 
containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" exitCode=0 Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.574836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6"} Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.586025 4727 generic.go:334] "Generic (PLEG): container finished" podID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" exitCode=0 Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.586091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2"} Jan 09 11:18:34 crc kubenswrapper[4727]: I0109 11:18:34.598042 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerStarted","Data":"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb"} Jan 09 11:18:34 crc kubenswrapper[4727]: I0109 11:18:34.621566 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kg722" podStartSLOduration=1.907251868 podStartE2EDuration="5.621537895s" podCreationTimestamp="2026-01-09 11:18:29 +0000 UTC" firstStartedPulling="2026-01-09 11:18:30.534157895 +0000 UTC m=+1955.984062666" lastFinishedPulling="2026-01-09 11:18:34.248443912 +0000 UTC m=+1959.698348693" observedRunningTime="2026-01-09 11:18:34.619442879 +0000 UTC m=+1960.069347660" watchObservedRunningTime="2026-01-09 11:18:34.621537895 +0000 UTC m=+1960.071442706" Jan 09 11:18:35 crc kubenswrapper[4727]: I0109 
11:18:35.621157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerStarted","Data":"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67"} Jan 09 11:18:35 crc kubenswrapper[4727]: I0109 11:18:35.648204 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmhd8" podStartSLOduration=3.791780057 podStartE2EDuration="8.648176225s" podCreationTimestamp="2026-01-09 11:18:27 +0000 UTC" firstStartedPulling="2026-01-09 11:18:29.519953456 +0000 UTC m=+1954.969858237" lastFinishedPulling="2026-01-09 11:18:34.376349624 +0000 UTC m=+1959.826254405" observedRunningTime="2026-01-09 11:18:35.643151762 +0000 UTC m=+1961.093056563" watchObservedRunningTime="2026-01-09 11:18:35.648176225 +0000 UTC m=+1961.098081006" Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.057109 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.066820 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.333774 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.333880 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.873613 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" path="/var/lib/kubelet/pods/fd540af1-9862-4759-ad16-587bbd49fea1/volumes" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.329958 4727 scope.go:117] "RemoveContainer" 
containerID="dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.388479 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmhd8" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" probeResult="failure" output=< Jan 09 11:18:39 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 11:18:39 crc kubenswrapper[4727]: > Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.414708 4727 scope.go:117] "RemoveContainer" containerID="f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.470114 4727 scope.go:117] "RemoveContainer" containerID="2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.708776 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.708854 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.760848 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:40 crc kubenswrapper[4727]: I0109 11:18:40.727895 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:40 crc kubenswrapper[4727]: I0109 11:18:40.791950 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:42 crc kubenswrapper[4727]: I0109 11:18:42.712690 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kg722" 
podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" containerID="cri-o://efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" gracePeriod=2 Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.173194 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.281721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"282e6323-e597-4905-a7d7-f885b7eff305\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.281787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"282e6323-e597-4905-a7d7-f885b7eff305\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.282054 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod \"282e6323-e597-4905-a7d7-f885b7eff305\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.282722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities" (OuterVolumeSpecName: "utilities") pod "282e6323-e597-4905-a7d7-f885b7eff305" (UID: "282e6323-e597-4905-a7d7-f885b7eff305"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.289200 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj" (OuterVolumeSpecName: "kube-api-access-2lxnj") pod "282e6323-e597-4905-a7d7-f885b7eff305" (UID: "282e6323-e597-4905-a7d7-f885b7eff305"). InnerVolumeSpecName "kube-api-access-2lxnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.315732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "282e6323-e597-4905-a7d7-f885b7eff305" (UID: "282e6323-e597-4905-a7d7-f885b7eff305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.384725 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.384780 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.384792 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725414 4727 generic.go:334] "Generic (PLEG): container finished" podID="282e6323-e597-4905-a7d7-f885b7eff305" 
containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" exitCode=0 Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725588 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb"} Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"94dea0e522fa183593e3777105081b07c39058a2c243f7c9790b7aade563bd6c"} Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.726011 4727 scope.go:117] "RemoveContainer" containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.753896 4727 scope.go:117] "RemoveContainer" containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.770841 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.781197 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.796287 4727 scope.go:117] "RemoveContainer" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.836114 4727 scope.go:117] "RemoveContainer" containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" Jan 09 
11:18:43 crc kubenswrapper[4727]: E0109 11:18:43.837664 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb\": container with ID starting with efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb not found: ID does not exist" containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.837774 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb"} err="failed to get container status \"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb\": rpc error: code = NotFound desc = could not find container \"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb\": container with ID starting with efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb not found: ID does not exist" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.837829 4727 scope.go:117] "RemoveContainer" containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" Jan 09 11:18:43 crc kubenswrapper[4727]: E0109 11:18:43.838259 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6\": container with ID starting with 56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6 not found: ID does not exist" containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.838293 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6"} err="failed to get container status 
\"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6\": rpc error: code = NotFound desc = could not find container \"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6\": container with ID starting with 56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6 not found: ID does not exist" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.838309 4727 scope.go:117] "RemoveContainer" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" Jan 09 11:18:43 crc kubenswrapper[4727]: E0109 11:18:43.838835 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804\": container with ID starting with 011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804 not found: ID does not exist" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.838867 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804"} err="failed to get container status \"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804\": rpc error: code = NotFound desc = could not find container \"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804\": container with ID starting with 011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804 not found: ID does not exist" Jan 09 11:18:44 crc kubenswrapper[4727]: I0109 11:18:44.871618 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282e6323-e597-4905-a7d7-f885b7eff305" path="/var/lib/kubelet/pods/282e6323-e597-4905-a7d7-f885b7eff305/volumes" Jan 09 11:18:46 crc kubenswrapper[4727]: I0109 11:18:46.767843 4727 generic.go:334] "Generic (PLEG): container finished" podID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" 
containerID="24b8cb7c86256279d4e47319da3d87c1e5d0fd8bd60aa38b7566a705e7d9003f" exitCode=0 Jan 09 11:18:46 crc kubenswrapper[4727]: I0109 11:18:46.767923 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerDied","Data":"24b8cb7c86256279d4e47319da3d87c1e5d0fd8bd60aa38b7566a705e7d9003f"} Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.392072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.452003 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.472793 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.545381 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.545892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.545988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.560259 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj" (OuterVolumeSpecName: "kube-api-access-5l2cj") pod "fc6114d6-7052-46b3-a8e5-c8b9731cc92c" (UID: "fc6114d6-7052-46b3-a8e5-c8b9731cc92c"). InnerVolumeSpecName "kube-api-access-5l2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.578591 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory" (OuterVolumeSpecName: "inventory") pod "fc6114d6-7052-46b3-a8e5-c8b9731cc92c" (UID: "fc6114d6-7052-46b3-a8e5-c8b9731cc92c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.580834 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc6114d6-7052-46b3-a8e5-c8b9731cc92c" (UID: "fc6114d6-7052-46b3-a8e5-c8b9731cc92c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.649369 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.649420 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.649439 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.801734 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.802795 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerDied","Data":"34ab025612c8accc0a3d077ad7711b72fb3a0786386f472ea626ccc61d8251ab"} Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.802865 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ab025612c8accc0a3d077ad7711b72fb3a0786386f472ea626ccc61d8251ab" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.827943 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.912682 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9n6wb"] Jan 09 
11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913262 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-utilities" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-utilities" Jan 09 11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-content" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913330 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-content" Jan 09 11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913348 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913355 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" Jan 09 11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913382 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913666 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913703 4727 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.914677 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.916911 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.917390 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.918936 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.921365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.924636 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9n6wb"] Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.060905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.060991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: 
\"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.061062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.163693 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.163782 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.163853 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.169091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.171201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.185019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.265175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.810501 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmhd8" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" containerID="cri-o://e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" gracePeriod=2 Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.868797 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9n6wb"] Jan 09 11:18:49 crc kubenswrapper[4727]: W0109 11:18:49.879763 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod247ff33e_a764_4e75_9d54_2c45ae8d8ca7.slice/crio-9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a WatchSource:0}: Error finding container 9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a: Status 404 returned error can't find the container with id 9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.364097 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.498778 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.498884 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.498943 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.499772 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities" (OuterVolumeSpecName: "utilities") pod "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" (UID: "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.506848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf" (OuterVolumeSpecName: "kube-api-access-d5kbf") pod "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" (UID: "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd"). InnerVolumeSpecName "kube-api-access-d5kbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.601538 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.602220 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.627563 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" (UID: "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.704054 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823784 4727 generic.go:334] "Generic (PLEG): container finished" podID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" exitCode=0 Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"4296e666b17eebb6a3607981a9d50b3da27e937ddf21b913930259e0af4c499e"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823919 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823946 4727 scope.go:117] "RemoveContainer" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.826049 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerStarted","Data":"721dfd54ebdaaf992a29619a2bdfaf035cdad7bf634052a03310ce06e2b9eb98"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.826110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerStarted","Data":"9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.854497 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" podStartSLOduration=2.210729444 podStartE2EDuration="2.854471821s" podCreationTimestamp="2026-01-09 11:18:48 +0000 UTC" firstStartedPulling="2026-01-09 11:18:49.886882368 +0000 UTC m=+1975.336787149" lastFinishedPulling="2026-01-09 11:18:50.530624745 +0000 UTC m=+1975.980529526" observedRunningTime="2026-01-09 11:18:50.845406681 +0000 UTC m=+1976.295311472" watchObservedRunningTime="2026-01-09 11:18:50.854471821 +0000 UTC m=+1976.304376602" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.856807 4727 scope.go:117] "RemoveContainer" 
containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.897402 4727 scope.go:117] "RemoveContainer" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.924402 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.924457 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.934959 4727 scope.go:117] "RemoveContainer" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" Jan 09 11:18:50 crc kubenswrapper[4727]: E0109 11:18:50.936260 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67\": container with ID starting with e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67 not found: ID does not exist" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.936346 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67"} err="failed to get container status \"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67\": rpc error: code = NotFound desc = could not find container \"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67\": container with ID starting with e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67 not found: ID does not exist" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.936398 4727 scope.go:117] "RemoveContainer" 
containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" Jan 09 11:18:50 crc kubenswrapper[4727]: E0109 11:18:50.936929 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2\": container with ID starting with faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2 not found: ID does not exist" containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.937041 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2"} err="failed to get container status \"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2\": rpc error: code = NotFound desc = could not find container \"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2\": container with ID starting with faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2 not found: ID does not exist" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.937129 4727 scope.go:117] "RemoveContainer" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" Jan 09 11:18:50 crc kubenswrapper[4727]: E0109 11:18:50.937692 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1\": container with ID starting with 9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1 not found: ID does not exist" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.937720 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1"} err="failed to get container status \"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1\": rpc error: code = NotFound desc = could not find container \"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1\": container with ID starting with 9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1 not found: ID does not exist" Jan 09 11:18:52 crc kubenswrapper[4727]: I0109 11:18:52.881311 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" path="/var/lib/kubelet/pods/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd/volumes" Jan 09 11:18:58 crc kubenswrapper[4727]: I0109 11:18:58.904712 4727 generic.go:334] "Generic (PLEG): container finished" podID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerID="721dfd54ebdaaf992a29619a2bdfaf035cdad7bf634052a03310ce06e2b9eb98" exitCode=0 Jan 09 11:18:58 crc kubenswrapper[4727]: I0109 11:18:58.905485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerDied","Data":"721dfd54ebdaaf992a29619a2bdfaf035cdad7bf634052a03310ce06e2b9eb98"} Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.343968 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.448117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.448246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.448376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.456679 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w" (OuterVolumeSpecName: "kube-api-access-f9v2w") pod "247ff33e-a764-4e75-9d54-2c45ae8d8ca7" (UID: "247ff33e-a764-4e75-9d54-2c45ae8d8ca7"). InnerVolumeSpecName "kube-api-access-f9v2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.481750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "247ff33e-a764-4e75-9d54-2c45ae8d8ca7" (UID: "247ff33e-a764-4e75-9d54-2c45ae8d8ca7"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.484795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "247ff33e-a764-4e75-9d54-2c45ae8d8ca7" (UID: "247ff33e-a764-4e75-9d54-2c45ae8d8ca7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.552012 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.552112 4727 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.552153 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.926866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerDied","Data":"9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a"} Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.927361 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.927451 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.030602 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg"] Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031193 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-content" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031215 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-content" Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031256 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerName="ssh-known-hosts-edpm-deployment" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031264 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerName="ssh-known-hosts-edpm-deployment" Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031291 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031299 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-utilities" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031305 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-utilities" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 
11:19:01.043009 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.043058 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerName="ssh-known-hosts-edpm-deployment" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.043855 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg"] Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.043969 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.047233 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.047892 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.047984 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.049285 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.168359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 
11:19:01.168645 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.168692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.271351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.271425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.272700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.277240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.284450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.290981 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.363806 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.946122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg"] Jan 09 11:19:02 crc kubenswrapper[4727]: I0109 11:19:02.959852 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerStarted","Data":"76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e"} Jan 09 11:19:03 crc kubenswrapper[4727]: I0109 11:19:03.971158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerStarted","Data":"b26baaff3461f4a0d9e23e0a86fe29bb590cb12134075b57ba0420af5c684ab7"} Jan 09 11:19:03 crc kubenswrapper[4727]: I0109 11:19:03.995368 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" podStartSLOduration=2.0832852060000002 podStartE2EDuration="2.995340597s" podCreationTimestamp="2026-01-09 11:19:01 +0000 UTC" firstStartedPulling="2026-01-09 11:19:01.945561341 +0000 UTC m=+1987.395466112" lastFinishedPulling="2026-01-09 11:19:02.857616722 +0000 UTC m=+1988.307521503" observedRunningTime="2026-01-09 11:19:03.990100408 +0000 UTC m=+1989.440005189" watchObservedRunningTime="2026-01-09 11:19:03.995340597 +0000 UTC m=+1989.445245388" Jan 09 11:19:12 crc kubenswrapper[4727]: I0109 11:19:12.045440 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f717d58-9e42-4359-89e8-70a60345d546" containerID="b26baaff3461f4a0d9e23e0a86fe29bb590cb12134075b57ba0420af5c684ab7" exitCode=0 Jan 09 11:19:12 crc kubenswrapper[4727]: I0109 11:19:12.045559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerDied","Data":"b26baaff3461f4a0d9e23e0a86fe29bb590cb12134075b57ba0420af5c684ab7"} Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.491122 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.587341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod \"6f717d58-9e42-4359-89e8-70a60345d546\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.587540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"6f717d58-9e42-4359-89e8-70a60345d546\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.587675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"6f717d58-9e42-4359-89e8-70a60345d546\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.595641 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx" (OuterVolumeSpecName: "kube-api-access-fb8bx") pod "6f717d58-9e42-4359-89e8-70a60345d546" (UID: "6f717d58-9e42-4359-89e8-70a60345d546"). InnerVolumeSpecName "kube-api-access-fb8bx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.619672 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory" (OuterVolumeSpecName: "inventory") pod "6f717d58-9e42-4359-89e8-70a60345d546" (UID: "6f717d58-9e42-4359-89e8-70a60345d546"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.623085 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f717d58-9e42-4359-89e8-70a60345d546" (UID: "6f717d58-9e42-4359-89e8-70a60345d546"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.691318 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.691682 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.691759 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.066772 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" 
event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerDied","Data":"76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e"} Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.066821 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.066901 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.153173 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd"] Jan 09 11:19:14 crc kubenswrapper[4727]: E0109 11:19:14.153796 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f717d58-9e42-4359-89e8-70a60345d546" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.153819 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f717d58-9e42-4359-89e8-70a60345d546" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.154055 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f717d58-9e42-4359-89e8-70a60345d546" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.154884 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.163549 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd"] Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.165875 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.165992 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.166101 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.166187 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.203793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.203928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 
11:19:14.204020 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: E0109 11:19:14.297719 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f717d58_9e42_4359_89e8_70a60345d546.slice/crio-76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f717d58_9e42_4359_89e8_70a60345d546.slice\": RecentStats: unable to find data in memory cache]" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.306338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.306499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.306618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.314078 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.316053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.329372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.491113 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:15 crc kubenswrapper[4727]: I0109 11:19:15.097642 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd"] Jan 09 11:19:15 crc kubenswrapper[4727]: I0109 11:19:15.105891 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:19:16 crc kubenswrapper[4727]: I0109 11:19:16.090041 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerStarted","Data":"9c3e5c27749a5c29de930643a249290a798e51772386006a26d6344c344a1772"} Jan 09 11:19:17 crc kubenswrapper[4727]: I0109 11:19:17.101683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerStarted","Data":"5f83596c1e469c63ef0e98d3e7a5155782419cbcd0d1d7c8568ad4945944466c"} Jan 09 11:19:17 crc kubenswrapper[4727]: I0109 11:19:17.131340 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" podStartSLOduration=2.211464595 podStartE2EDuration="3.131313803s" podCreationTimestamp="2026-01-09 11:19:14 +0000 UTC" firstStartedPulling="2026-01-09 11:19:15.105675547 +0000 UTC m=+2000.555580328" lastFinishedPulling="2026-01-09 11:19:16.025524755 +0000 UTC m=+2001.475429536" observedRunningTime="2026-01-09 11:19:17.124083811 +0000 UTC m=+2002.573988592" watchObservedRunningTime="2026-01-09 11:19:17.131313803 +0000 UTC m=+2002.581218584" Jan 09 11:19:27 crc kubenswrapper[4727]: I0109 11:19:27.204646 4727 generic.go:334] "Generic (PLEG): container finished" podID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" 
containerID="5f83596c1e469c63ef0e98d3e7a5155782419cbcd0d1d7c8568ad4945944466c" exitCode=0 Jan 09 11:19:27 crc kubenswrapper[4727]: I0109 11:19:27.205537 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerDied","Data":"5f83596c1e469c63ef0e98d3e7a5155782419cbcd0d1d7c8568ad4945944466c"} Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.660857 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.782579 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.782676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.782880 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.790364 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz" (OuterVolumeSpecName: "kube-api-access-q2grz") pod 
"72a53995-d5d0-4795-a1c7-f8a570a0ff6a" (UID: "72a53995-d5d0-4795-a1c7-f8a570a0ff6a"). InnerVolumeSpecName "kube-api-access-q2grz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.815454 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory" (OuterVolumeSpecName: "inventory") pod "72a53995-d5d0-4795-a1c7-f8a570a0ff6a" (UID: "72a53995-d5d0-4795-a1c7-f8a570a0ff6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.815839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72a53995-d5d0-4795-a1c7-f8a570a0ff6a" (UID: "72a53995-d5d0-4795-a1c7-f8a570a0ff6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.886469 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.886529 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.886545 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.225840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerDied","Data":"9c3e5c27749a5c29de930643a249290a798e51772386006a26d6344c344a1772"} Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.225905 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3e5c27749a5c29de930643a249290a798e51772386006a26d6344c344a1772" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.225917 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.347751 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9"] Jan 09 11:19:29 crc kubenswrapper[4727]: E0109 11:19:29.348830 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.348856 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.349130 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.350102 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.352789 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.352789 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355600 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355828 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355855 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355982 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.356196 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.357490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.358593 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9"] Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.499981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500034 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500108 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500521 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603865 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604067 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604092 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604161 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604191 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.611074 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: 
\"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.611133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.612122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.613621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.613663 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 
11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.613994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.615873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.617141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.618184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.618347 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.618731 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.619223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.619471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.626620 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.683854 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:30 crc kubenswrapper[4727]: I0109 11:19:30.237314 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9"] Jan 09 11:19:31 crc kubenswrapper[4727]: I0109 11:19:31.249984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerStarted","Data":"b284f99069e94bd8e39b291ab4f4ab645d853c164b98792b5677381efef6064e"} Jan 09 11:19:32 crc kubenswrapper[4727]: I0109 11:19:32.262986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerStarted","Data":"6ed6b623442e77a1da46af05fa2bcea2b99c6d0df048d1d6d510c677429ea804"} Jan 09 11:19:32 crc kubenswrapper[4727]: I0109 11:19:32.293802 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" podStartSLOduration=2.410821419 podStartE2EDuration="3.293782188s" podCreationTimestamp="2026-01-09 11:19:29 +0000 UTC" firstStartedPulling="2026-01-09 11:19:30.243811197 +0000 UTC m=+2015.693715978" lastFinishedPulling="2026-01-09 11:19:31.126771956 +0000 UTC m=+2016.576676747" observedRunningTime="2026-01-09 11:19:32.289012841 +0000 UTC m=+2017.738917662" watchObservedRunningTime="2026-01-09 11:19:32.293782188 +0000 UTC m=+2017.743686969" Jan 09 11:19:39 crc kubenswrapper[4727]: I0109 11:19:39.404970 4727 
patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:19:39 crc kubenswrapper[4727]: I0109 11:19:39.405908 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:20:09 crc kubenswrapper[4727]: I0109 11:20:09.405707 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:20:09 crc kubenswrapper[4727]: I0109 11:20:09.406433 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:20:10 crc kubenswrapper[4727]: I0109 11:20:10.662460 4727 generic.go:334] "Generic (PLEG): container finished" podID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerID="6ed6b623442e77a1da46af05fa2bcea2b99c6d0df048d1d6d510c677429ea804" exitCode=0 Jan 09 11:20:10 crc kubenswrapper[4727]: I0109 11:20:10.662556 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" 
event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerDied","Data":"6ed6b623442e77a1da46af05fa2bcea2b99c6d0df048d1d6d510c677429ea804"} Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.138746 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.141703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.141883 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.141952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.142157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143116 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143585 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143620 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143753 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.153535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.153926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.154979 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155170 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155133 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2" (OuterVolumeSpecName: "kube-api-access-7bmj2") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "kube-api-access-7bmj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155239 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155263 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155945 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.157419 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.157649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.162780 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.170723 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.226274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.226316 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory" (OuterVolumeSpecName: "inventory") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247683 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247736 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247754 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc 
kubenswrapper[4727]: I0109 11:20:12.247770 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247783 4727 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247796 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247810 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247824 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247836 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247848 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247862 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247876 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247889 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247903 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.698281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerDied","Data":"b284f99069e94bd8e39b291ab4f4ab645d853c164b98792b5677381efef6064e"} Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.699081 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b284f99069e94bd8e39b291ab4f4ab645d853c164b98792b5677381efef6064e" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.698379 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.808782 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm"] Jan 09 11:20:12 crc kubenswrapper[4727]: E0109 11:20:12.809427 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.809456 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.810294 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.811289 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.816352 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.816447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.820409 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.820426 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.820571 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.831713 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm"] Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860264 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860306 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961783 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961859 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.963605 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.967187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.967233 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.967669 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.980306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:13 crc kubenswrapper[4727]: I0109 11:20:13.135531 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:13 crc kubenswrapper[4727]: I0109 11:20:13.525481 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm"] Jan 09 11:20:13 crc kubenswrapper[4727]: I0109 11:20:13.711527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerStarted","Data":"b3fec9ce625c04eecefc526e66fe07c8ef5f1f066415dfc8184f8ca354b3bf7d"} Jan 09 11:20:14 crc kubenswrapper[4727]: I0109 11:20:14.727061 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerStarted","Data":"71ddd9fdf4a470173413312cb828e861c44bb5121021ea88ef19eced9d9cb7bf"} Jan 09 11:20:14 crc kubenswrapper[4727]: I0109 11:20:14.749036 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" podStartSLOduration=2.247235573 podStartE2EDuration="2.749005987s" podCreationTimestamp="2026-01-09 11:20:12 +0000 UTC" firstStartedPulling="2026-01-09 11:20:13.529900634 +0000 UTC m=+2058.979805415" lastFinishedPulling="2026-01-09 11:20:14.031671058 +0000 UTC m=+2059.481575829" observedRunningTime="2026-01-09 11:20:14.746826489 +0000 UTC m=+2060.196731310" watchObservedRunningTime="2026-01-09 11:20:14.749005987 +0000 UTC m=+2060.198910778" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.405294 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.406154 4727 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.406240 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.407285 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.407357 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd" gracePeriod=600 Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.997846 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd" exitCode=0 Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.997903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"} Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 
11:20:39.998304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"} Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.998347 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:21:20 crc kubenswrapper[4727]: I0109 11:21:20.462224 4727 generic.go:334] "Generic (PLEG): container finished" podID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerID="71ddd9fdf4a470173413312cb828e861c44bb5121021ea88ef19eced9d9cb7bf" exitCode=0 Jan 09 11:21:20 crc kubenswrapper[4727]: I0109 11:21:20.462322 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerDied","Data":"71ddd9fdf4a470173413312cb828e861c44bb5121021ea88ef19eced9d9cb7bf"} Jan 09 11:21:21 crc kubenswrapper[4727]: I0109 11:21:21.972545 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.074831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.075328 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.075702 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.075838 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.076763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.082958 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl" (OuterVolumeSpecName: "kube-api-access-cx5hl") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "kube-api-access-cx5hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.084833 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.125852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.126402 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory" (OuterVolumeSpecName: "inventory") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.144340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182002 4727 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182635 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182737 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182854 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182934 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.485839 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerDied","Data":"b3fec9ce625c04eecefc526e66fe07c8ef5f1f066415dfc8184f8ca354b3bf7d"} Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.485879 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.485891 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3fec9ce625c04eecefc526e66fe07c8ef5f1f066415dfc8184f8ca354b3bf7d" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.599536 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82"] Jan 09 11:21:22 crc kubenswrapper[4727]: E0109 11:21:22.599984 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.600003 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.600193 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.600866 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.613467 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.613818 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.613989 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.614126 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.614648 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.614928 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.615913 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82"] Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692283 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692399 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692541 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795840 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795894 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.807305 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.815429 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.941130 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:23 crc kubenswrapper[4727]: I0109 11:21:23.516106 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82"] Jan 09 11:21:24 crc kubenswrapper[4727]: I0109 11:21:24.512080 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerStarted","Data":"afa19bd0290bcc947a157dfed1f40ca5489236d6b5f1ccbce8ce6fcc5af45edf"} Jan 09 11:21:25 crc kubenswrapper[4727]: I0109 11:21:25.543465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerStarted","Data":"917763738c78c07dc56b747fa98f3e04970c051a5fb817aef84285f08efb7048"} Jan 09 11:21:25 crc kubenswrapper[4727]: I0109 11:21:25.571229 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" podStartSLOduration=2.577331133 podStartE2EDuration="3.571194926s" podCreationTimestamp="2026-01-09 11:21:22 +0000 UTC" firstStartedPulling="2026-01-09 11:21:23.536390361 +0000 UTC m=+2128.986295162" lastFinishedPulling="2026-01-09 11:21:24.530254174 +0000 UTC m=+2129.980158955" observedRunningTime="2026-01-09 11:21:25.567532941 +0000 UTC m=+2131.017437732" watchObservedRunningTime="2026-01-09 11:21:25.571194926 +0000 UTC m=+2131.021099727" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 
11:22:07.342798 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.346093 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.354848 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.467417 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.467532 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.467788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.570542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod 
\"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.570629 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.570688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.571248 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.571411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.594717 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"certified-operators-l6sq5\" (UID: 
\"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.672288 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:08 crc kubenswrapper[4727]: I0109 11:22:08.422203 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:08 crc kubenswrapper[4727]: E0109 11:22:08.854926 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b88c83_24e9_4f37_9671_0dc9d8c1abf1.slice/crio-3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b88c83_24e9_4f37_9671_0dc9d8c1abf1.slice/crio-conmon-3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:22:09 crc kubenswrapper[4727]: I0109 11:22:09.002329 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f" exitCode=0 Jan 09 11:22:09 crc kubenswrapper[4727]: I0109 11:22:09.002443 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f"} Jan 09 11:22:09 crc kubenswrapper[4727]: I0109 11:22:09.002892 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" 
event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerStarted","Data":"0046b42f8e401447fa0ab1dc80943ca94acdf238cb3e97f8bdcadcde73dae8cd"} Jan 09 11:22:10 crc kubenswrapper[4727]: I0109 11:22:10.017857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerStarted","Data":"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d"} Jan 09 11:22:11 crc kubenswrapper[4727]: I0109 11:22:11.032210 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" exitCode=0 Jan 09 11:22:11 crc kubenswrapper[4727]: I0109 11:22:11.032312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d"} Jan 09 11:22:12 crc kubenswrapper[4727]: I0109 11:22:12.045981 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerStarted","Data":"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd"} Jan 09 11:22:12 crc kubenswrapper[4727]: I0109 11:22:12.070844 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l6sq5" podStartSLOduration=2.561301109 podStartE2EDuration="5.070819451s" podCreationTimestamp="2026-01-09 11:22:07 +0000 UTC" firstStartedPulling="2026-01-09 11:22:09.005632526 +0000 UTC m=+2174.455537307" lastFinishedPulling="2026-01-09 11:22:11.515150868 +0000 UTC m=+2176.965055649" observedRunningTime="2026-01-09 11:22:12.066950471 +0000 UTC m=+2177.516855252" watchObservedRunningTime="2026-01-09 11:22:12.070819451 +0000 UTC 
m=+2177.520724232" Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.112311 4727 generic.go:334] "Generic (PLEG): container finished" podID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerID="917763738c78c07dc56b747fa98f3e04970c051a5fb817aef84285f08efb7048" exitCode=0 Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.112450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerDied","Data":"917763738c78c07dc56b747fa98f3e04970c051a5fb817aef84285f08efb7048"} Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.673114 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.673183 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.750873 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.188979 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.638557 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.735860 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736445 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736583 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc 
kubenswrapper[4727]: I0109 11:22:18.736763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.743187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.744320 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p" (OuterVolumeSpecName: "kube-api-access-7w49p") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "kube-api-access-7w49p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.769719 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.770299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.772806 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.773298 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory" (OuterVolumeSpecName: "inventory") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840203 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840251 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840267 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840282 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840298 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840311 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.138033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerDied","Data":"afa19bd0290bcc947a157dfed1f40ca5489236d6b5f1ccbce8ce6fcc5af45edf"} Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.138071 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.138105 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa19bd0290bcc947a157dfed1f40ca5489236d6b5f1ccbce8ce6fcc5af45edf" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.268540 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v"] Jan 09 11:22:19 crc kubenswrapper[4727]: E0109 11:22:19.269219 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.269241 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.269609 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.270532 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276107 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276370 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276367 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276577 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276675 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.282643 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v"] Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: 
\"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452534 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 
11:22:19.452677 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452754 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.459138 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: 
\"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.459412 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.460200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.463032 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.476919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.603966 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.914034 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.146300 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l6sq5" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server" containerID="cri-o://324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" gracePeriod=2 Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.185181 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v"] Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.524107 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.680045 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.680110 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.680298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod 
\"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.681538 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities" (OuterVolumeSpecName: "utilities") pod "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" (UID: "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.689008 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl" (OuterVolumeSpecName: "kube-api-access-kcqrl") pod "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" (UID: "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1"). InnerVolumeSpecName "kube-api-access-kcqrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.736874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" (UID: "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.783213 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.784158 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.784186 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179670 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" exitCode=0 Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd"} Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179793 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"0046b42f8e401447fa0ab1dc80943ca94acdf238cb3e97f8bdcadcde73dae8cd"} Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179841 4727 scope.go:117] "RemoveContainer" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.181942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerStarted","Data":"6f49ac9c9911a0566289b8031b75d8ac26fc7bc544ef7b7da479b4fe3906f46a"} Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.223814 4727 scope.go:117] "RemoveContainer" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.232546 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.241606 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.245303 4727 scope.go:117] "RemoveContainer" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.269644 4727 scope.go:117] "RemoveContainer" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" Jan 09 11:22:21 crc kubenswrapper[4727]: E0109 11:22:21.270217 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd\": container with ID starting with 324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd not found: ID does not exist" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.270263 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd"} err="failed to get container status \"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd\": rpc error: code = NotFound desc = could not find container \"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd\": container with ID starting with 324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd not found: ID does not exist" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.270297 4727 scope.go:117] "RemoveContainer" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" Jan 09 11:22:21 crc kubenswrapper[4727]: E0109 11:22:21.270942 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d\": container with ID starting with a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d not found: ID does not exist" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.271112 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d"} err="failed to get container status \"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d\": rpc error: code = NotFound desc = could not find container \"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d\": container with ID 
starting with a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d not found: ID does not exist" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.271225 4727 scope.go:117] "RemoveContainer" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f" Jan 09 11:22:21 crc kubenswrapper[4727]: E0109 11:22:21.272018 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f\": container with ID starting with 3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f not found: ID does not exist" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.272085 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f"} err="failed to get container status \"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f\": rpc error: code = NotFound desc = could not find container \"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f\": container with ID starting with 3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f not found: ID does not exist" Jan 09 11:22:22 crc kubenswrapper[4727]: I0109 11:22:22.194771 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerStarted","Data":"c8c2f367edb0664189b6ee0a5ac5f8874637772a39b40812888801e33cc22027"} Jan 09 11:22:22 crc kubenswrapper[4727]: I0109 11:22:22.224083 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" podStartSLOduration=2.394051873 podStartE2EDuration="3.224058282s" podCreationTimestamp="2026-01-09 11:22:19 +0000 
UTC" firstStartedPulling="2026-01-09 11:22:20.183653701 +0000 UTC m=+2185.633558482" lastFinishedPulling="2026-01-09 11:22:21.0136601 +0000 UTC m=+2186.463564891" observedRunningTime="2026-01-09 11:22:22.212232516 +0000 UTC m=+2187.662137297" watchObservedRunningTime="2026-01-09 11:22:22.224058282 +0000 UTC m=+2187.673963063" Jan 09 11:22:22 crc kubenswrapper[4727]: I0109 11:22:22.878997 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" path="/var/lib/kubelet/pods/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1/volumes" Jan 09 11:22:39 crc kubenswrapper[4727]: I0109 11:22:39.405342 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:22:39 crc kubenswrapper[4727]: I0109 11:22:39.406421 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.576715 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ssm76"] Jan 09 11:22:55 crc kubenswrapper[4727]: E0109 11:22:55.578252 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578274 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server" Jan 09 11:22:55 crc kubenswrapper[4727]: E0109 11:22:55.578292 4727 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-utilities" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578301 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-utilities" Jan 09 11:22:55 crc kubenswrapper[4727]: E0109 11:22:55.578350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-content" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578358 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-content" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578648 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.580371 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.586728 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ssm76"] Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.695834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.695911 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.696001 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.798730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.798938 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.799075 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.799439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.799631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.827943 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.950352 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:22:56 crc kubenswrapper[4727]: I0109 11:22:56.577071 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ssm76"] Jan 09 11:22:57 crc kubenswrapper[4727]: I0109 11:22:57.566888 4727 generic.go:334] "Generic (PLEG): container finished" podID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3" exitCode=0 Jan 09 11:22:57 crc kubenswrapper[4727]: I0109 11:22:57.566944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"} Jan 09 11:22:57 crc kubenswrapper[4727]: I0109 11:22:57.567381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerStarted","Data":"727a60c621ddad51cb136733c31b3847261b5d7b94b13ab66e3ea2faa30e3d2b"} Jan 09 11:22:59 crc kubenswrapper[4727]: I0109 11:22:59.591379 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerStarted","Data":"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"} Jan 09 11:23:00 crc kubenswrapper[4727]: I0109 11:23:00.607177 4727 generic.go:334] "Generic (PLEG): container finished" podID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe" exitCode=0 Jan 09 11:23:00 crc kubenswrapper[4727]: I0109 11:23:00.607345 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" 
event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"} Jan 09 11:23:01 crc kubenswrapper[4727]: I0109 11:23:01.620478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerStarted","Data":"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"} Jan 09 11:23:01 crc kubenswrapper[4727]: I0109 11:23:01.646058 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ssm76" podStartSLOduration=3.071977875 podStartE2EDuration="6.646028651s" podCreationTimestamp="2026-01-09 11:22:55 +0000 UTC" firstStartedPulling="2026-01-09 11:22:57.569038668 +0000 UTC m=+2223.018943449" lastFinishedPulling="2026-01-09 11:23:01.143089444 +0000 UTC m=+2226.592994225" observedRunningTime="2026-01-09 11:23:01.64371864 +0000 UTC m=+2227.093623421" watchObservedRunningTime="2026-01-09 11:23:01.646028651 +0000 UTC m=+2227.095933432" Jan 09 11:23:05 crc kubenswrapper[4727]: I0109 11:23:05.951166 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:23:05 crc kubenswrapper[4727]: I0109 11:23:05.954271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:23:06 crc kubenswrapper[4727]: I0109 11:23:06.013874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:23:06 crc kubenswrapper[4727]: I0109 11:23:06.730051 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:23:06 crc kubenswrapper[4727]: I0109 11:23:06.782996 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-ssm76"] Jan 09 11:23:08 crc kubenswrapper[4727]: I0109 11:23:08.696828 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ssm76" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" containerID="cri-o://cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" gracePeriod=2 Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.149227 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.329952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"a547e222-4018-4b48-b858-e6dd84f85cb1\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.330268 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"a547e222-4018-4b48-b858-e6dd84f85cb1\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.330325 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"a547e222-4018-4b48-b858-e6dd84f85cb1\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.331641 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities" (OuterVolumeSpecName: "utilities") pod "a547e222-4018-4b48-b858-e6dd84f85cb1" (UID: 
"a547e222-4018-4b48-b858-e6dd84f85cb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.338844 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr" (OuterVolumeSpecName: "kube-api-access-jhbrr") pod "a547e222-4018-4b48-b858-e6dd84f85cb1" (UID: "a547e222-4018-4b48-b858-e6dd84f85cb1"). InnerVolumeSpecName "kube-api-access-jhbrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.385386 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a547e222-4018-4b48-b858-e6dd84f85cb1" (UID: "a547e222-4018-4b48-b858-e6dd84f85cb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.405935 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.406020 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.433099 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbrr\" (UniqueName: 
\"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") on node \"crc\" DevicePath \"\"" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.433135 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.433145 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711363 4727 generic.go:334] "Generic (PLEG): container finished" podID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" exitCode=0 Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"} Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711531 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ssm76" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"727a60c621ddad51cb136733c31b3847261b5d7b94b13ab66e3ea2faa30e3d2b"} Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711589 4727 scope.go:117] "RemoveContainer" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.751315 4727 scope.go:117] "RemoveContainer" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.758702 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ssm76"] Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.768840 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ssm76"] Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.776797 4727 scope.go:117] "RemoveContainer" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.820114 4727 scope.go:117] "RemoveContainer" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" Jan 09 11:23:09 crc kubenswrapper[4727]: E0109 11:23:09.821415 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad\": container with ID starting with cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad not found: ID does not exist" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.821482 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"} err="failed to get container status \"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad\": rpc error: code = NotFound desc = could not find container \"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad\": container with ID starting with cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad not found: ID does not exist" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.821585 4727 scope.go:117] "RemoveContainer" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe" Jan 09 11:23:09 crc kubenswrapper[4727]: E0109 11:23:09.822136 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe\": container with ID starting with 72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe not found: ID does not exist" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.822186 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"} err="failed to get container status \"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe\": rpc error: code = NotFound desc = could not find container \"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe\": container with ID starting with 72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe not found: ID does not exist" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.822210 4727 scope.go:117] "RemoveContainer" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3" Jan 09 11:23:09 crc kubenswrapper[4727]: E0109 
11:23:09.822755 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3\": container with ID starting with 2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3 not found: ID does not exist" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3" Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.822788 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"} err="failed to get container status \"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3\": rpc error: code = NotFound desc = could not find container \"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3\": container with ID starting with 2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3 not found: ID does not exist" Jan 09 11:23:10 crc kubenswrapper[4727]: I0109 11:23:10.874810 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" path="/var/lib/kubelet/pods/a547e222-4018-4b48-b858-e6dd84f85cb1/volumes" Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.404617 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.405403 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.405465 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.406345 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.406407 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" gracePeriod=600 Jan 09 11:23:39 crc kubenswrapper[4727]: E0109 11:23:39.539381 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.113874 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" exitCode=0 Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.113983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"} Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.114418 4727 scope.go:117] "RemoveContainer" containerID="c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd" Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.116112 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:23:40 crc kubenswrapper[4727]: E0109 11:23:40.116495 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:23:52 crc kubenswrapper[4727]: I0109 11:23:52.861029 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:23:52 crc kubenswrapper[4727]: E0109 11:23:52.863004 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:24:06 crc kubenswrapper[4727]: I0109 11:24:06.860682 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:24:06 crc kubenswrapper[4727]: E0109 11:24:06.863730 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:24:20 crc kubenswrapper[4727]: I0109 11:24:20.868405 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:24:20 crc kubenswrapper[4727]: E0109 11:24:20.875193 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:24:35 crc kubenswrapper[4727]: I0109 11:24:35.860591 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:24:35 crc kubenswrapper[4727]: E0109 11:24:35.861374 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:24:48 crc kubenswrapper[4727]: I0109 11:24:48.860377 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:24:48 crc kubenswrapper[4727]: E0109 11:24:48.861194 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:24:59 crc kubenswrapper[4727]: I0109 11:24:59.860834 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:24:59 crc kubenswrapper[4727]: E0109 11:24:59.862019 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:25:12 crc kubenswrapper[4727]: I0109 11:25:12.861571 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:25:12 crc kubenswrapper[4727]: E0109 11:25:12.862767 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:25:27 crc kubenswrapper[4727]: I0109 11:25:27.861211 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:25:27 crc kubenswrapper[4727]: E0109 11:25:27.862230 4727 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:25:40 crc kubenswrapper[4727]: I0109 11:25:40.860774 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:25:40 crc kubenswrapper[4727]: E0109 11:25:40.862809 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:25:54 crc kubenswrapper[4727]: I0109 11:25:54.860740 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:25:54 crc kubenswrapper[4727]: E0109 11:25:54.862057 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:07 crc kubenswrapper[4727]: I0109 11:26:07.861749 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:26:07 crc kubenswrapper[4727]: E0109 11:26:07.862708 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:18 crc kubenswrapper[4727]: I0109 11:26:18.867597 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:26:18 crc kubenswrapper[4727]: E0109 11:26:18.870113 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:29 crc kubenswrapper[4727]: I0109 11:26:29.861240 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:26:29 crc kubenswrapper[4727]: E0109 11:26:29.862246 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:40 crc kubenswrapper[4727]: I0109 11:26:40.861380 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:26:40 crc kubenswrapper[4727]: E0109 
11:26:40.862366 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:50 crc kubenswrapper[4727]: I0109 11:26:50.174837 4727 generic.go:334] "Generic (PLEG): container finished" podID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerID="c8c2f367edb0664189b6ee0a5ac5f8874637772a39b40812888801e33cc22027" exitCode=0 Jan 09 11:26:50 crc kubenswrapper[4727]: I0109 11:26:50.174919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerDied","Data":"c8c2f367edb0664189b6ee0a5ac5f8874637772a39b40812888801e33cc22027"} Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.691131 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.894861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895559 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895746 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.905838 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql" (OuterVolumeSpecName: "kube-api-access-fb9ql") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "kube-api-access-fb9ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.907272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.932392 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.938648 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory" (OuterVolumeSpecName: "inventory") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.938967 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998200 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998243 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998253 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998264 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998274 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.202769 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerDied","Data":"6f49ac9c9911a0566289b8031b75d8ac26fc7bc544ef7b7da479b4fe3906f46a"} Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.203171 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f49ac9c9911a0566289b8031b75d8ac26fc7bc544ef7b7da479b4fe3906f46a" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.202849 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.335563 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"] Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336648 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336671 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336721 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336729 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336782 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-content" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336796 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-content" Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336828 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-utilities" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336840 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-utilities" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.337156 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.337181 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.338344 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.345918 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.346406 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.346553 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.346711 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.347299 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.347450 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.347645 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.353572 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"] Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406576 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 
11:26:52.406657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406816 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: 
\"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509222 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509638 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.512293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.515754 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.515969 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.516410 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.516639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.516944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.517460 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.517490 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.529733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.667547 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:53 crc kubenswrapper[4727]: I0109 11:26:53.268585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"] Jan 09 11:26:53 crc kubenswrapper[4727]: I0109 11:26:53.270489 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:26:54 crc kubenswrapper[4727]: I0109 11:26:54.224662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerStarted","Data":"e96c38b34971938a13a5d95cc7e9e5bb9f0334f54e93107a458540de51932122"} Jan 09 11:26:54 crc kubenswrapper[4727]: I0109 11:26:54.877162 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:26:54 crc kubenswrapper[4727]: E0109 11:26:54.878390 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:55 crc kubenswrapper[4727]: I0109 11:26:55.235863 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerStarted","Data":"b97c7281572885beb0fb4a270a332ed5b2e1e4e28d4b6930d596c07bdbbb787b"} Jan 09 11:26:55 crc kubenswrapper[4727]: I0109 11:26:55.264377 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" 
podStartSLOduration=2.470818497 podStartE2EDuration="3.264353663s" podCreationTimestamp="2026-01-09 11:26:52 +0000 UTC" firstStartedPulling="2026-01-09 11:26:53.269456649 +0000 UTC m=+2458.719361430" lastFinishedPulling="2026-01-09 11:26:54.062991825 +0000 UTC m=+2459.512896596" observedRunningTime="2026-01-09 11:26:55.264039115 +0000 UTC m=+2460.713943946" watchObservedRunningTime="2026-01-09 11:26:55.264353663 +0000 UTC m=+2460.714258454" Jan 09 11:27:08 crc kubenswrapper[4727]: I0109 11:27:08.861162 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:08 crc kubenswrapper[4727]: E0109 11:27:08.862596 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:19 crc kubenswrapper[4727]: I0109 11:27:19.861656 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:19 crc kubenswrapper[4727]: E0109 11:27:19.863590 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:31 crc kubenswrapper[4727]: I0109 11:27:31.861245 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:31 crc 
kubenswrapper[4727]: E0109 11:27:31.862281 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:43 crc kubenswrapper[4727]: I0109 11:27:43.861160 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:43 crc kubenswrapper[4727]: E0109 11:27:43.862251 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:58 crc kubenswrapper[4727]: I0109 11:27:58.860896 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:58 crc kubenswrapper[4727]: E0109 11:27:58.861881 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:09 crc kubenswrapper[4727]: I0109 11:28:09.861031 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 
09 11:28:09 crc kubenswrapper[4727]: E0109 11:28:09.863165 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:22 crc kubenswrapper[4727]: I0109 11:28:22.860951 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:28:22 crc kubenswrapper[4727]: E0109 11:28:22.863436 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:37 crc kubenswrapper[4727]: I0109 11:28:37.861772 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:28:37 crc kubenswrapper[4727]: E0109 11:28:37.862991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:41 crc kubenswrapper[4727]: I0109 11:28:41.938229 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] 
Jan 09 11:28:41 crc kubenswrapper[4727]: I0109 11:28:41.941959 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:41 crc kubenswrapper[4727]: I0109 11:28:41.957009 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.079722 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.079800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.079828 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.181777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 
11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.181855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.181878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.182634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.182642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.218399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.277255 4727 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.895428 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] Jan 09 11:28:43 crc kubenswrapper[4727]: I0109 11:28:43.358129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerStarted","Data":"2dbfd7c16220db57790802d1d9d60761a735bdf5de8eb47bf70b9c8a4a1de75b"} Jan 09 11:28:44 crc kubenswrapper[4727]: I0109 11:28:44.367708 4727 generic.go:334] "Generic (PLEG): container finished" podID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b" exitCode=0 Jan 09 11:28:44 crc kubenswrapper[4727]: I0109 11:28:44.367801 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"} Jan 09 11:28:46 crc kubenswrapper[4727]: I0109 11:28:46.393175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerStarted","Data":"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"} Jan 09 11:28:47 crc kubenswrapper[4727]: I0109 11:28:47.404712 4727 generic.go:334] "Generic (PLEG): container finished" podID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60" exitCode=0 Jan 09 11:28:47 crc kubenswrapper[4727]: I0109 11:28:47.404787 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" 
event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"} Jan 09 11:28:49 crc kubenswrapper[4727]: I0109 11:28:49.429450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerStarted","Data":"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"} Jan 09 11:28:49 crc kubenswrapper[4727]: I0109 11:28:49.462483 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hgk2v" podStartSLOduration=4.624964721 podStartE2EDuration="8.462453212s" podCreationTimestamp="2026-01-09 11:28:41 +0000 UTC" firstStartedPulling="2026-01-09 11:28:44.369783261 +0000 UTC m=+2569.819688042" lastFinishedPulling="2026-01-09 11:28:48.207271752 +0000 UTC m=+2573.657176533" observedRunningTime="2026-01-09 11:28:49.449242657 +0000 UTC m=+2574.899147448" watchObservedRunningTime="2026-01-09 11:28:49.462453212 +0000 UTC m=+2574.912357993" Jan 09 11:28:49 crc kubenswrapper[4727]: I0109 11:28:49.861073 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:28:50 crc kubenswrapper[4727]: I0109 11:28:50.445711 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"} Jan 09 11:28:52 crc kubenswrapper[4727]: I0109 11:28:52.278146 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:28:52 crc kubenswrapper[4727]: I0109 11:28:52.279287 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 
11:28:53 crc kubenswrapper[4727]: I0109 11:28:53.329544 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hgk2v" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" probeResult="failure" output=< Jan 09 11:28:53 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 11:28:53 crc kubenswrapper[4727]: > Jan 09 11:29:02 crc kubenswrapper[4727]: I0109 11:29:02.329946 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:29:02 crc kubenswrapper[4727]: I0109 11:29:02.386464 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:29:02 crc kubenswrapper[4727]: I0109 11:29:02.577940 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] Jan 09 11:29:03 crc kubenswrapper[4727]: I0109 11:29:03.580161 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hgk2v" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" containerID="cri-o://811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" gracePeriod=2 Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.195985 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.302213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.302377 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.302531 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.303456 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities" (OuterVolumeSpecName: "utilities") pod "a561451a-0ba0-48cb-bf09-b9a12d49c7ef" (UID: "a561451a-0ba0-48cb-bf09-b9a12d49c7ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.310848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6" (OuterVolumeSpecName: "kube-api-access-mbpb6") pod "a561451a-0ba0-48cb-bf09-b9a12d49c7ef" (UID: "a561451a-0ba0-48cb-bf09-b9a12d49c7ef"). InnerVolumeSpecName "kube-api-access-mbpb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.405792 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.406351 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.435043 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a561451a-0ba0-48cb-bf09-b9a12d49c7ef" (UID: "a561451a-0ba0-48cb-bf09-b9a12d49c7ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.508831 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595013 4727 generic.go:334] "Generic (PLEG): container finished" podID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" exitCode=0 Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"} Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595196 4727 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"2dbfd7c16220db57790802d1d9d60761a735bdf5de8eb47bf70b9c8a4a1de75b"} Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595268 4727 scope.go:117] "RemoveContainer" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.623105 4727 scope.go:117] "RemoveContainer" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.655529 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.668121 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.672446 4727 scope.go:117] "RemoveContainer" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.736838 4727 scope.go:117] "RemoveContainer" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" Jan 09 11:29:04 crc kubenswrapper[4727]: E0109 11:29:04.737763 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38\": container with ID starting with 811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38 not found: ID does not exist" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.737803 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"} err="failed to get container status \"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38\": rpc error: code = NotFound desc = could not find container \"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38\": container with ID starting with 811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38 not found: ID does not exist" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.737843 4727 scope.go:117] "RemoveContainer" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60" Jan 09 11:29:04 crc kubenswrapper[4727]: E0109 11:29:04.747843 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60\": container with ID starting with 1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60 not found: ID does not exist" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.747902 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"} err="failed to get container status \"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60\": rpc error: code = NotFound desc = could not find container \"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60\": container with ID starting with 1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60 not found: ID does not exist" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.747942 4727 scope.go:117] "RemoveContainer" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b" Jan 09 11:29:04 crc kubenswrapper[4727]: E0109 
11:29:04.748408 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b\": container with ID starting with bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b not found: ID does not exist" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.748436 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"} err="failed to get container status \"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b\": rpc error: code = NotFound desc = could not find container \"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b\": container with ID starting with bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b not found: ID does not exist" Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.874390 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" path="/var/lib/kubelet/pods/a561451a-0ba0-48cb-bf09-b9a12d49c7ef/volumes" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.257076 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:15 crc kubenswrapper[4727]: E0109 11:29:15.258372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258391 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" Jan 09 11:29:15 crc kubenswrapper[4727]: E0109 11:29:15.258417 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" 
containerName="extract-utilities" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258425 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-utilities" Jan 09 11:29:15 crc kubenswrapper[4727]: E0109 11:29:15.258471 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-content" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258480 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-content" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258710 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.263956 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.273170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.379487 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.379587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " 
pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.379647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.481749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.481964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.481990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.482569 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " 
pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.482650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.508491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.586646 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.140224 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.925127 4727 generic.go:334] "Generic (PLEG): container finished" podID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerID="d3d419655e0a8c088b2e588edae2dd1ed27724f48dd1d110bfe6363f8810c59b" exitCode=0 Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.925190 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"d3d419655e0a8c088b2e588edae2dd1ed27724f48dd1d110bfe6363f8810c59b"} Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.925228 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" 
event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerStarted","Data":"5196fa7b96d91e2af73fde39e5560330346d1e1ff711007beb9c427b472ce53d"} Jan 09 11:29:18 crc kubenswrapper[4727]: I0109 11:29:18.947549 4727 generic.go:334] "Generic (PLEG): container finished" podID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerID="99847b0f5d5ea4c9025ebfd6014ccb30de8cbd7f9e5fec2e95c6213ae7fa5f84" exitCode=0 Jan 09 11:29:18 crc kubenswrapper[4727]: I0109 11:29:18.947607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"99847b0f5d5ea4c9025ebfd6014ccb30de8cbd7f9e5fec2e95c6213ae7fa5f84"} Jan 09 11:29:19 crc kubenswrapper[4727]: I0109 11:29:19.969780 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerStarted","Data":"1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3"} Jan 09 11:29:19 crc kubenswrapper[4727]: I0109 11:29:19.998252 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mj2kv" podStartSLOduration=2.339687435 podStartE2EDuration="4.998226166s" podCreationTimestamp="2026-01-09 11:29:15 +0000 UTC" firstStartedPulling="2026-01-09 11:29:16.927127065 +0000 UTC m=+2602.377031846" lastFinishedPulling="2026-01-09 11:29:19.585665796 +0000 UTC m=+2605.035570577" observedRunningTime="2026-01-09 11:29:19.988094929 +0000 UTC m=+2605.437999710" watchObservedRunningTime="2026-01-09 11:29:19.998226166 +0000 UTC m=+2605.448130947" Jan 09 11:29:20 crc kubenswrapper[4727]: I0109 11:29:20.981997 4727 generic.go:334] "Generic (PLEG): container finished" podID="291b6783-3c71-4449-b696-27c7c340c41a" containerID="b97c7281572885beb0fb4a270a332ed5b2e1e4e28d4b6930d596c07bdbbb787b" exitCode=0 Jan 09 11:29:20 crc kubenswrapper[4727]: I0109 
11:29:20.982054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerDied","Data":"b97c7281572885beb0fb4a270a332ed5b2e1e4e28d4b6930d596c07bdbbb787b"} Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.474872 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551000 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551091 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551258 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551285 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 
crc kubenswrapper[4727]: I0109 11:29:22.551384 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551573 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551670 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.559409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h" (OuterVolumeSpecName: "kube-api-access-sbt5h") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "kube-api-access-sbt5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.570941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.584918 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.588692 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.594404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.597776 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory" (OuterVolumeSpecName: "inventory") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.598082 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.603065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.612139 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.654954 4727 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655000 4727 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655010 4727 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655018 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655026 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 
11:29:22.655034 4727 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655044 4727 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655052 4727 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655061 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.016650 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerDied","Data":"e96c38b34971938a13a5d95cc7e9e5bb9f0334f54e93107a458540de51932122"} Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.016729 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e96c38b34971938a13a5d95cc7e9e5bb9f0334f54e93107a458540de51932122" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.016824 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.133206 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"] Jan 09 11:29:23 crc kubenswrapper[4727]: E0109 11:29:23.133798 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291b6783-3c71-4449-b696-27c7c340c41a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.133825 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="291b6783-3c71-4449-b696-27c7c340c41a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.134035 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="291b6783-3c71-4449-b696-27c7c340c41a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.134854 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142032 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142070 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142282 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142403 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.165425 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"] Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.268658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269282 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269336 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269436 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269458 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.270221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.372831 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.372910 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.372953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 
11:29:23.373011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.374009 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.374067 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.374093 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.377962 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.377993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.378385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.378882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.380767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.384446 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.402144 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.465069 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:24 crc kubenswrapper[4727]: I0109 11:29:24.013461 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"] Jan 09 11:29:24 crc kubenswrapper[4727]: I0109 11:29:24.032438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerStarted","Data":"5215f2c39a133eb2ca530e9330648f3c15663b75c3c08b1dcc95a75b53b789ae"} Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.043947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerStarted","Data":"d98d9a6875efb3d63e2cbb7a99d54696008a62492d141c221d77dc675ea3743f"} Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.082840 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" podStartSLOduration=1.6523808519999998 podStartE2EDuration="2.082813432s" podCreationTimestamp="2026-01-09 11:29:23 +0000 UTC" firstStartedPulling="2026-01-09 11:29:24.018437711 +0000 UTC m=+2609.468342492" lastFinishedPulling="2026-01-09 11:29:24.448870291 +0000 UTC m=+2609.898775072" observedRunningTime="2026-01-09 11:29:25.067338215 +0000 UTC m=+2610.517243026" watchObservedRunningTime="2026-01-09 11:29:25.082813432 +0000 UTC m=+2610.532718213" Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.586780 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.586916 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 
11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.641689 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:26 crc kubenswrapper[4727]: I0109 11:29:26.102778 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:26 crc kubenswrapper[4727]: I0109 11:29:26.158254 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:28 crc kubenswrapper[4727]: I0109 11:29:28.072953 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mj2kv" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" containerID="cri-o://1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3" gracePeriod=2 Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.084226 4727 generic.go:334] "Generic (PLEG): container finished" podID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerID="1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3" exitCode=0 Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.084319 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3"} Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.202956 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.309794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.310067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.310111 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.310903 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities" (OuterVolumeSpecName: "utilities") pod "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" (UID: "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.317295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk" (OuterVolumeSpecName: "kube-api-access-wqgzk") pod "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" (UID: "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b"). InnerVolumeSpecName "kube-api-access-wqgzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.334872 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" (UID: "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.413385 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.413439 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.413454 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.099708 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"5196fa7b96d91e2af73fde39e5560330346d1e1ff711007beb9c427b472ce53d"} Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.099834 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.100142 4727 scope.go:117] "RemoveContainer" containerID="1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.131744 4727 scope.go:117] "RemoveContainer" containerID="99847b0f5d5ea4c9025ebfd6014ccb30de8cbd7f9e5fec2e95c6213ae7fa5f84" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.161776 4727 scope.go:117] "RemoveContainer" containerID="d3d419655e0a8c088b2e588edae2dd1ed27724f48dd1d110bfe6363f8810c59b" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.172301 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.184030 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.875336 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" path="/var/lib/kubelet/pods/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b/volumes" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.158365 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"] Jan 09 11:30:00 crc kubenswrapper[4727]: E0109 11:30:00.159998 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160018 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" Jan 09 11:30:00 crc kubenswrapper[4727]: E0109 11:30:00.160058 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" 
containerName="extract-utilities" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160067 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="extract-utilities" Jan 09 11:30:00 crc kubenswrapper[4727]: E0109 11:30:00.160101 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="extract-content" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160109 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="extract-content" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160361 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.161495 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.168963 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.184127 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"] Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.200339 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.224776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"collect-profiles-29465970-s9lxd\" (UID: 
\"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.224917 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.225014 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.328045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.328283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.328332 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.329747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.342794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.347184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.523082 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.015012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"] Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.431684 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerStarted","Data":"2cca7262ad28ec1090ad80f914dc4e5864d23da2de8952de560f92f61e1d3514"} Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.431745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerStarted","Data":"ee8a459daa813a8c8b9c2d87b9db0ce68c3b3c16df0a9e20c7b31ccb15637732"} Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.490773 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" podStartSLOduration=1.490746046 podStartE2EDuration="1.490746046s" podCreationTimestamp="2026-01-09 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:30:01.466939699 +0000 UTC m=+2646.916844490" watchObservedRunningTime="2026-01-09 11:30:01.490746046 +0000 UTC m=+2646.940650817" Jan 09 11:30:02 crc kubenswrapper[4727]: I0109 11:30:02.442944 4727 generic.go:334] "Generic (PLEG): container finished" podID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerID="2cca7262ad28ec1090ad80f914dc4e5864d23da2de8952de560f92f61e1d3514" exitCode=0 Jan 09 11:30:02 crc kubenswrapper[4727]: I0109 11:30:02.443016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerDied","Data":"2cca7262ad28ec1090ad80f914dc4e5864d23da2de8952de560f92f61e1d3514"} Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.816964 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.905247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.905467 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.905520 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.906608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4a89b8e-3a44-4294-9077-c4496fb4c6dc" (UID: "d4a89b8e-3a44-4294-9077-c4496fb4c6dc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.914474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5" (OuterVolumeSpecName: "kube-api-access-8kqh5") pod "d4a89b8e-3a44-4294-9077-c4496fb4c6dc" (UID: "d4a89b8e-3a44-4294-9077-c4496fb4c6dc"). InnerVolumeSpecName "kube-api-access-8kqh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.914603 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4a89b8e-3a44-4294-9077-c4496fb4c6dc" (UID: "d4a89b8e-3a44-4294-9077-c4496fb4c6dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.009668 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.009865 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") on node \"crc\" DevicePath \"\"" Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.009878 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.464575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" 
event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerDied","Data":"ee8a459daa813a8c8b9c2d87b9db0ce68c3b3c16df0a9e20c7b31ccb15637732"} Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.464629 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8a459daa813a8c8b9c2d87b9db0ce68c3b3c16df0a9e20c7b31ccb15637732" Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.464694 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.902523 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.910932 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 11:30:06 crc kubenswrapper[4727]: I0109 11:30:06.871499 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" path="/var/lib/kubelet/pods/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd/volumes" Jan 09 11:30:39 crc kubenswrapper[4727]: I0109 11:30:39.897685 4727 scope.go:117] "RemoveContainer" containerID="f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d" Jan 09 11:31:09 crc kubenswrapper[4727]: I0109 11:31:09.405418 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:31:09 crc kubenswrapper[4727]: I0109 11:31:09.406195 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:31:39 crc kubenswrapper[4727]: I0109 11:31:39.404933 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:31:39 crc kubenswrapper[4727]: I0109 11:31:39.406539 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:31:59 crc kubenswrapper[4727]: I0109 11:31:59.628753 4727 generic.go:334] "Generic (PLEG): container finished" podID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerID="d98d9a6875efb3d63e2cbb7a99d54696008a62492d141c221d77dc675ea3743f" exitCode=0 Jan 09 11:31:59 crc kubenswrapper[4727]: I0109 11:31:59.628825 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerDied","Data":"d98d9a6875efb3d63e2cbb7a99d54696008a62492d141c221d77dc675ea3743f"} Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.105077 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.275821 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.275898 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.275936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276026 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276108 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc 
kubenswrapper[4727]: I0109 11:32:01.276280 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276315 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.283273 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.285373 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2" (OuterVolumeSpecName: "kube-api-access-kprl2") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "kube-api-access-kprl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.307333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.311535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.314697 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.315567 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory" (OuterVolumeSpecName: "inventory") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.316942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378799 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378844 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378855 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378873 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378889 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc 
kubenswrapper[4727]: I0109 11:32:01.378902 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378915 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.652069 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerDied","Data":"5215f2c39a133eb2ca530e9330648f3c15663b75c3c08b1dcc95a75b53b789ae"} Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.652541 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5215f2c39a133eb2ca530e9330648f3c15663b75c3c08b1dcc95a75b53b789ae" Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.652145 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.404935 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.405909 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.406012 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.407099 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.407172 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884" gracePeriod=600 Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.735262 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884" exitCode=0 Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.735324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"} Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.735377 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:32:10 crc kubenswrapper[4727]: I0109 11:32:10.746192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"} Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.849663 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:38 crc kubenswrapper[4727]: E0109 11:32:38.851285 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851309 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 09 11:32:38 crc kubenswrapper[4727]: E0109 11:32:38.851347 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerName="collect-profiles" Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851356 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" 
containerName="collect-profiles" Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851676 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerName="collect-profiles" Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851696 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.853564 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.877081 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.019601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.019669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.020316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"certified-operators-xm82r\" (UID: 
\"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.123302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.123543 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.123628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.124567 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.124777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") 
" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.158051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.197605 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.716343 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:40 crc kubenswrapper[4727]: I0109 11:32:40.039456 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerStarted","Data":"65d89f292522ae39d30d7ca95637f43d6e1816896a3fff2a9aecdbd4feee13c8"} Jan 09 11:32:41 crc kubenswrapper[4727]: I0109 11:32:41.051427 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerID="f445bb472615ef4dbb7efed1a020f1c3dfd8628284bfc5ece0df880d77dad63e" exitCode=0 Jan 09 11:32:41 crc kubenswrapper[4727]: I0109 11:32:41.051539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"f445bb472615ef4dbb7efed1a020f1c3dfd8628284bfc5ece0df880d77dad63e"} Jan 09 11:32:41 crc kubenswrapper[4727]: I0109 11:32:41.054205 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:32:42 crc kubenswrapper[4727]: I0109 11:32:42.064675 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerStarted","Data":"facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e"} Jan 09 11:32:43 crc kubenswrapper[4727]: I0109 11:32:43.077392 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerID="facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e" exitCode=0 Jan 09 11:32:43 crc kubenswrapper[4727]: I0109 11:32:43.077460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e"} Jan 09 11:32:44 crc kubenswrapper[4727]: I0109 11:32:44.088869 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerStarted","Data":"5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0"} Jan 09 11:32:44 crc kubenswrapper[4727]: I0109 11:32:44.117732 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xm82r" podStartSLOduration=3.459190481 podStartE2EDuration="6.117711812s" podCreationTimestamp="2026-01-09 11:32:38 +0000 UTC" firstStartedPulling="2026-01-09 11:32:41.053900843 +0000 UTC m=+2806.503805624" lastFinishedPulling="2026-01-09 11:32:43.712422174 +0000 UTC m=+2809.162326955" observedRunningTime="2026-01-09 11:32:44.114104437 +0000 UTC m=+2809.564009218" watchObservedRunningTime="2026-01-09 11:32:44.117711812 +0000 UTC m=+2809.567616593" Jan 09 11:32:49 crc kubenswrapper[4727]: I0109 11:32:49.198710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:49 crc kubenswrapper[4727]: I0109 
11:32:49.199313 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:49 crc kubenswrapper[4727]: I0109 11:32:49.268585 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:50 crc kubenswrapper[4727]: I0109 11:32:50.240442 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:50 crc kubenswrapper[4727]: I0109 11:32:50.300534 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:52 crc kubenswrapper[4727]: I0109 11:32:52.202237 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xm82r" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="registry-server" containerID="cri-o://5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0" gracePeriod=2 Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216160 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerID="5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0" exitCode=0 Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0"} Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"65d89f292522ae39d30d7ca95637f43d6e1816896a3fff2a9aecdbd4feee13c8"} Jan 09 11:32:53 crc 
kubenswrapper[4727]: I0109 11:32:53.216685 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d89f292522ae39d30d7ca95637f43d6e1816896a3fff2a9aecdbd4feee13c8" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.256001 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.463612 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.464826 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.464865 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.465755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities" (OuterVolumeSpecName: "utilities") pod "9e114ddd-6947-4f8d-9679-8c56d3c33bd9" (UID: "9e114ddd-6947-4f8d-9679-8c56d3c33bd9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.470899 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p" (OuterVolumeSpecName: "kube-api-access-m7g9p") pod "9e114ddd-6947-4f8d-9679-8c56d3c33bd9" (UID: "9e114ddd-6947-4f8d-9679-8c56d3c33bd9"). InnerVolumeSpecName "kube-api-access-m7g9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.507980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e114ddd-6947-4f8d-9679-8c56d3c33bd9" (UID: "9e114ddd-6947-4f8d-9679-8c56d3c33bd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.566861 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.566903 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.566917 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.225965 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.270464 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.281020 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.895705 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" path="/var/lib/kubelet/pods/9e114ddd-6947-4f8d-9679-8c56d3c33bd9/volumes" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.735394 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 11:32:59 crc kubenswrapper[4727]: E0109 11:32:59.736736 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-content" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.736755 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-content" Jan 09 11:32:59 crc kubenswrapper[4727]: E0109 11:32:59.736783 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-utilities" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.736793 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-utilities" Jan 09 11:32:59 crc kubenswrapper[4727]: E0109 11:32:59.736840 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="registry-server" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.736848 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" 
containerName="registry-server" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.737121 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="registry-server" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.738091 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.756473 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.774480 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.774498 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.774870 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ghr4t" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.777953 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.908728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.908780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") 
pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.908952 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909093 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909338 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod 
\"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909499 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.011728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.011847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.011934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 
crc kubenswrapper[4727]: I0109 11:33:00.012005 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012385 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012878 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013155 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013903 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.014492 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.023310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.023433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.023617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.042702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.052096 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") 
" pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.104243 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.577869 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 11:33:01 crc kubenswrapper[4727]: I0109 11:33:01.306871 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerStarted","Data":"8349c448d8e6552d0e3152e0251e4b01ee6c1b1475591f37b47c5feb06d40267"} Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.265921 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.276084 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.291698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.468450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.468771 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " 
pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.468987 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.571360 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.571491 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.571659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.572059 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " 
pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.572178 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.599439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.620112 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:11 crc kubenswrapper[4727]: I0109 11:33:11.210435 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:11 crc kubenswrapper[4727]: I0109 11:33:11.446804 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerStarted","Data":"550eac94714d5bbaee91f4e9a318d037390474a281cd19f3ba024ffdf68b2b5a"} Jan 09 11:33:15 crc kubenswrapper[4727]: I0109 11:33:15.505779 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b" exitCode=0 Jan 09 11:33:15 crc kubenswrapper[4727]: I0109 11:33:15.505991 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" 
event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"} Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.495524 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.496219 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:ni
l,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqnbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.497367 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.732333 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" Jan 09 11:33:36 crc kubenswrapper[4727]: I0109 11:33:36.742026 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerStarted","Data":"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"} Jan 09 11:33:37 crc kubenswrapper[4727]: I0109 11:33:37.753305 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720" exitCode=0 Jan 09 11:33:37 crc kubenswrapper[4727]: I0109 11:33:37.753384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"} Jan 09 11:33:39 crc kubenswrapper[4727]: I0109 11:33:39.776886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerStarted","Data":"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"} Jan 09 11:33:39 crc kubenswrapper[4727]: I0109 11:33:39.799361 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pm9fv" podStartSLOduration=26.429445206 
podStartE2EDuration="29.799334195s" podCreationTimestamp="2026-01-09 11:33:10 +0000 UTC" firstStartedPulling="2026-01-09 11:33:35.357367348 +0000 UTC m=+2860.807272129" lastFinishedPulling="2026-01-09 11:33:38.727256337 +0000 UTC m=+2864.177161118" observedRunningTime="2026-01-09 11:33:39.796089937 +0000 UTC m=+2865.245994718" watchObservedRunningTime="2026-01-09 11:33:39.799334195 +0000 UTC m=+2865.249238976" Jan 09 11:33:40 crc kubenswrapper[4727]: I0109 11:33:40.620891 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:40 crc kubenswrapper[4727]: I0109 11:33:40.620951 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:41 crc kubenswrapper[4727]: I0109 11:33:41.679467 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pm9fv" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" probeResult="failure" output=< Jan 09 11:33:41 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 11:33:41 crc kubenswrapper[4727]: > Jan 09 11:33:50 crc kubenswrapper[4727]: I0109 11:33:50.698832 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:50 crc kubenswrapper[4727]: I0109 11:33:50.761809 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:50 crc kubenswrapper[4727]: I0109 11:33:50.946803 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:51 crc kubenswrapper[4727]: I0109 11:33:51.898620 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerStarted","Data":"6fd71c43d4d8330f713c6bebee4de8234126f4e73026f0f31d0a1aa516bc5ecc"} Jan 09 11:33:51 crc kubenswrapper[4727]: I0109 11:33:51.898799 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pm9fv" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" containerID="cri-o://ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" gracePeriod=2 Jan 09 11:33:51 crc kubenswrapper[4727]: I0109 11:33:51.938705 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.021014776 podStartE2EDuration="53.938675957s" podCreationTimestamp="2026-01-09 11:32:58 +0000 UTC" firstStartedPulling="2026-01-09 11:33:00.58936034 +0000 UTC m=+2826.039265121" lastFinishedPulling="2026-01-09 11:33:50.507021511 +0000 UTC m=+2875.956926302" observedRunningTime="2026-01-09 11:33:51.915618443 +0000 UTC m=+2877.365523254" watchObservedRunningTime="2026-01-09 11:33:51.938675957 +0000 UTC m=+2877.388580748" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.403755 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.498096 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.498392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.498450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.504277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities" (OuterVolumeSpecName: "utilities") pod "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" (UID: "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.528120 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk" (OuterVolumeSpecName: "kube-api-access-6cctk") pod "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" (UID: "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb"). InnerVolumeSpecName "kube-api-access-6cctk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.563445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" (UID: "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.602563 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") on node \"crc\" DevicePath \"\"" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.602630 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.602650 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910829 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" exitCode=0 Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"} Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910915 4727 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"550eac94714d5bbaee91f4e9a318d037390474a281cd19f3ba024ffdf68b2b5a"} Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910967 4727 scope.go:117] "RemoveContainer" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.946068 4727 scope.go:117] "RemoveContainer" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720" Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.949463 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.960839 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.982666 4727 scope.go:117] "RemoveContainer" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b" Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.043144 4727 scope.go:117] "RemoveContainer" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" Jan 09 11:33:53 crc kubenswrapper[4727]: E0109 11:33:53.043892 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8\": container with ID starting with ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8 not found: ID does not exist" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.043960 
4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"} err="failed to get container status \"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8\": rpc error: code = NotFound desc = could not find container \"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8\": container with ID starting with ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8 not found: ID does not exist" Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.043999 4727 scope.go:117] "RemoveContainer" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720" Jan 09 11:33:53 crc kubenswrapper[4727]: E0109 11:33:53.044405 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720\": container with ID starting with 4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720 not found: ID does not exist" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720" Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.044437 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"} err="failed to get container status \"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720\": rpc error: code = NotFound desc = could not find container \"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720\": container with ID starting with 4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720 not found: ID does not exist" Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.044460 4727 scope.go:117] "RemoveContainer" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b" Jan 09 11:33:53 crc kubenswrapper[4727]: E0109 
11:33:53.048864 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b\": container with ID starting with b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b not found: ID does not exist" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b" Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.048909 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"} err="failed to get container status \"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b\": rpc error: code = NotFound desc = could not find container \"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b\": container with ID starting with b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b not found: ID does not exist" Jan 09 11:33:54 crc kubenswrapper[4727]: I0109 11:33:54.872955 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" path="/var/lib/kubelet/pods/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb/volumes" Jan 09 11:34:39 crc kubenswrapper[4727]: I0109 11:34:39.405645 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:34:39 crc kubenswrapper[4727]: I0109 11:34:39.406484 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 09 11:35:09 crc kubenswrapper[4727]: I0109 11:35:09.405157 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:35:09 crc kubenswrapper[4727]: I0109 11:35:09.405803 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.405329 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.406304 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.406385 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.407451 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"} 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.407528 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" gracePeriod=600 Jan 09 11:35:39 crc kubenswrapper[4727]: E0109 11:35:39.547953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.093532 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" exitCode=0 Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.093656 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"} Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.094879 4727 scope.go:117] "RemoveContainer" containerID="045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884" Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.095005 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 
09 11:35:40 crc kubenswrapper[4727]: E0109 11:35:40.095298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:35:53 crc kubenswrapper[4727]: I0109 11:35:53.861288 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:35:53 crc kubenswrapper[4727]: E0109 11:35:53.862682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:36:04 crc kubenswrapper[4727]: I0109 11:36:04.883014 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:36:04 crc kubenswrapper[4727]: E0109 11:36:04.884009 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:36:18 crc kubenswrapper[4727]: I0109 11:36:18.860690 4727 scope.go:117] "RemoveContainer" 
containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:36:18 crc kubenswrapper[4727]: E0109 11:36:18.861902 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:36:29 crc kubenswrapper[4727]: I0109 11:36:29.860955 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:36:29 crc kubenswrapper[4727]: E0109 11:36:29.864139 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:36:42 crc kubenswrapper[4727]: I0109 11:36:42.862150 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:36:42 crc kubenswrapper[4727]: E0109 11:36:42.863181 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:36:56 crc kubenswrapper[4727]: I0109 11:36:56.907596 4727 scope.go:117] 
"RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:36:56 crc kubenswrapper[4727]: E0109 11:36:56.908777 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:37:09 crc kubenswrapper[4727]: I0109 11:37:09.860898 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:37:09 crc kubenswrapper[4727]: E0109 11:37:09.862109 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:37:23 crc kubenswrapper[4727]: I0109 11:37:23.860489 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:37:23 crc kubenswrapper[4727]: E0109 11:37:23.863169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:37:38 crc kubenswrapper[4727]: I0109 11:37:38.861274 
4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:37:38 crc kubenswrapper[4727]: E0109 11:37:38.862142 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:37:49 crc kubenswrapper[4727]: I0109 11:37:49.860731 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:37:49 crc kubenswrapper[4727]: E0109 11:37:49.861685 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:38:03 crc kubenswrapper[4727]: I0109 11:38:03.860960 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:38:03 crc kubenswrapper[4727]: E0109 11:38:03.862477 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:38:15 crc kubenswrapper[4727]: I0109 
11:38:15.860691 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:38:15 crc kubenswrapper[4727]: E0109 11:38:15.867753 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:38:26 crc kubenswrapper[4727]: I0109 11:38:26.861691 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:38:26 crc kubenswrapper[4727]: E0109 11:38:26.862976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:38:40 crc kubenswrapper[4727]: I0109 11:38:40.152104 4727 scope.go:117] "RemoveContainer" containerID="f445bb472615ef4dbb7efed1a020f1c3dfd8628284bfc5ece0df880d77dad63e" Jan 09 11:38:40 crc kubenswrapper[4727]: I0109 11:38:40.860931 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:38:40 crc kubenswrapper[4727]: E0109 11:38:40.861612 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:38:52 crc kubenswrapper[4727]: I0109 11:38:52.861090 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:38:52 crc kubenswrapper[4727]: E0109 11:38:52.862036 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:04 crc kubenswrapper[4727]: I0109 11:39:04.869535 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:39:04 crc kubenswrapper[4727]: E0109 11:39:04.870712 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:18 crc kubenswrapper[4727]: I0109 11:39:18.861255 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:39:18 crc kubenswrapper[4727]: E0109 11:39:18.862470 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.814007 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.815469 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-content" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815487 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-content" Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.815551 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-utilities" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815561 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-utilities" Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.815576 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815583 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815878 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.817996 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.830046 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.860824 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.861149 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.953991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.954191 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.954235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m699s\" (UniqueName: 
\"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.056567 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.056666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.056749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.057182 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.057309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.080395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.141211 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.679136 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.577200 4727 generic.go:334] "Generic (PLEG): container finished" podID="42513cc8-0316-49f1-8062-74a805d1e27b" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" exitCode=0 Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.577733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb"} Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.577780 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerStarted","Data":"749709203c591e99d8095d66f17f59fc07f318ade2c0664182dbb247fc67d2b6"} Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.580881 4727 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 09 11:39:37 crc kubenswrapper[4727]: I0109 11:39:37.616971 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerStarted","Data":"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1"} Jan 09 11:39:40 crc kubenswrapper[4727]: I0109 11:39:40.218162 4727 scope.go:117] "RemoveContainer" containerID="5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0" Jan 09 11:39:40 crc kubenswrapper[4727]: I0109 11:39:40.247896 4727 scope.go:117] "RemoveContainer" containerID="facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e" Jan 09 11:39:41 crc kubenswrapper[4727]: I0109 11:39:41.658378 4727 generic.go:334] "Generic (PLEG): container finished" podID="42513cc8-0316-49f1-8062-74a805d1e27b" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" exitCode=0 Jan 09 11:39:41 crc kubenswrapper[4727]: I0109 11:39:41.658470 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1"} Jan 09 11:39:44 crc kubenswrapper[4727]: I0109 11:39:44.702158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerStarted","Data":"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b"} Jan 09 11:39:44 crc kubenswrapper[4727]: I0109 11:39:44.728440 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q8jd5" podStartSLOduration=3.735263495 podStartE2EDuration="12.728412519s" podCreationTimestamp="2026-01-09 11:39:32 +0000 UTC" firstStartedPulling="2026-01-09 11:39:34.580466881 +0000 UTC 
m=+3220.030371672" lastFinishedPulling="2026-01-09 11:39:43.573615915 +0000 UTC m=+3229.023520696" observedRunningTime="2026-01-09 11:39:44.721433539 +0000 UTC m=+3230.171338330" watchObservedRunningTime="2026-01-09 11:39:44.728412519 +0000 UTC m=+3230.178317310" Jan 09 11:39:44 crc kubenswrapper[4727]: I0109 11:39:44.866938 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:39:44 crc kubenswrapper[4727]: E0109 11:39:44.867254 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.141559 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.143108 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.208845 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.845270 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.905412 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:55 crc kubenswrapper[4727]: I0109 11:39:55.820829 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-q8jd5" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" containerID="cri-o://1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" gracePeriod=2 Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.366240 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.459618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"42513cc8-0316-49f1-8062-74a805d1e27b\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.459686 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"42513cc8-0316-49f1-8062-74a805d1e27b\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.460037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"42513cc8-0316-49f1-8062-74a805d1e27b\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.460369 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities" (OuterVolumeSpecName: "utilities") pod "42513cc8-0316-49f1-8062-74a805d1e27b" (UID: "42513cc8-0316-49f1-8062-74a805d1e27b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.460778 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.469089 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s" (OuterVolumeSpecName: "kube-api-access-m699s") pod "42513cc8-0316-49f1-8062-74a805d1e27b" (UID: "42513cc8-0316-49f1-8062-74a805d1e27b"). InnerVolumeSpecName "kube-api-access-m699s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.562721 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") on node \"crc\" DevicePath \"\"" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.597568 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42513cc8-0316-49f1-8062-74a805d1e27b" (UID: "42513cc8-0316-49f1-8062-74a805d1e27b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.666167 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.835747 4727 generic.go:334] "Generic (PLEG): container finished" podID="42513cc8-0316-49f1-8062-74a805d1e27b" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" exitCode=0 Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.835870 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b"} Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.836307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"749709203c591e99d8095d66f17f59fc07f318ade2c0664182dbb247fc67d2b6"} Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.835887 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.836344 4727 scope.go:117] "RemoveContainer" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.863012 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.863429 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.870807 4727 scope.go:117] "RemoveContainer" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.895619 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.896863 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.923872 4727 scope.go:117] "RemoveContainer" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.954484 4727 scope.go:117] "RemoveContainer" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.955175 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b\": container with ID starting with 1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b not found: ID does not exist" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.955229 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b"} err="failed to get container status \"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b\": rpc error: code = NotFound desc = could not find container \"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b\": container with ID starting with 1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b not found: ID does not exist" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.955268 4727 scope.go:117] "RemoveContainer" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.956134 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1\": container with ID starting with d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1 not found: ID does not exist" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.956173 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1"} err="failed to get container status \"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1\": rpc error: code = NotFound desc = could not find container \"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1\": container with ID 
starting with d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1 not found: ID does not exist" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.956196 4727 scope.go:117] "RemoveContainer" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.956563 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb\": container with ID starting with 58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb not found: ID does not exist" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.956592 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb"} err="failed to get container status \"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb\": rpc error: code = NotFound desc = could not find container \"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb\": container with ID starting with 58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb not found: ID does not exist" Jan 09 11:39:58 crc kubenswrapper[4727]: I0109 11:39:58.874194 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" path="/var/lib/kubelet/pods/42513cc8-0316-49f1-8062-74a805d1e27b/volumes" Jan 09 11:40:10 crc kubenswrapper[4727]: I0109 11:40:10.861577 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:40:10 crc kubenswrapper[4727]: E0109 11:40:10.862543 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:40:25 crc kubenswrapper[4727]: I0109 11:40:25.860131 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:40:25 crc kubenswrapper[4727]: E0109 11:40:25.860810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:40:40 crc kubenswrapper[4727]: I0109 11:40:40.860463 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:40:41 crc kubenswrapper[4727]: I0109 11:40:41.318220 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e"} Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.594029 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:42 crc kubenswrapper[4727]: E0109 11:40:42.595474 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="extract-content" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.595498 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" 
containerName="extract-content" Jan 09 11:40:42 crc kubenswrapper[4727]: E0109 11:40:42.595546 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="extract-utilities" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.595560 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="extract-utilities" Jan 09 11:40:42 crc kubenswrapper[4727]: E0109 11:40:42.595590 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.595602 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.596076 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.598749 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.618896 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.691892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.692065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.692186 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794256 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.817799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.922905 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:43 crc kubenswrapper[4727]: I0109 11:40:43.509841 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:44 crc kubenswrapper[4727]: I0109 11:40:44.353183 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" exitCode=0 Jan 09 11:40:44 crc kubenswrapper[4727]: I0109 11:40:44.353413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753"} Jan 09 11:40:44 crc kubenswrapper[4727]: I0109 11:40:44.353642 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerStarted","Data":"22e92d8c401319bdf05053675c6ee0f39db482299f3df225cbcda4d4a2d66309"} Jan 09 11:40:46 crc kubenswrapper[4727]: I0109 11:40:46.376372 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" exitCode=0 Jan 09 11:40:46 crc kubenswrapper[4727]: I0109 11:40:46.376478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b"} Jan 09 11:40:47 crc kubenswrapper[4727]: I0109 11:40:47.390287 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" 
event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerStarted","Data":"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc"} Jan 09 11:40:47 crc kubenswrapper[4727]: I0109 11:40:47.411791 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npbxg" podStartSLOduration=2.947334593 podStartE2EDuration="5.411761955s" podCreationTimestamp="2026-01-09 11:40:42 +0000 UTC" firstStartedPulling="2026-01-09 11:40:44.356220763 +0000 UTC m=+3289.806125534" lastFinishedPulling="2026-01-09 11:40:46.820648115 +0000 UTC m=+3292.270552896" observedRunningTime="2026-01-09 11:40:47.411298821 +0000 UTC m=+3292.861203612" watchObservedRunningTime="2026-01-09 11:40:47.411761955 +0000 UTC m=+3292.861666746" Jan 09 11:40:52 crc kubenswrapper[4727]: I0109 11:40:52.923792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:52 crc kubenswrapper[4727]: I0109 11:40:52.924401 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:52 crc kubenswrapper[4727]: I0109 11:40:52.992978 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:53 crc kubenswrapper[4727]: I0109 11:40:53.499346 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:53 crc kubenswrapper[4727]: I0109 11:40:53.560250 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:55 crc kubenswrapper[4727]: I0109 11:40:55.463349 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npbxg" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" 
containerID="cri-o://66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" gracePeriod=2 Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.098413 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.132255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.132360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.132386 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.146793 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm" (OuterVolumeSpecName: "kube-api-access-29grm") pod "e1aa2ef9-2c42-46c6-ae66-42148ff8722d" (UID: "e1aa2ef9-2c42-46c6-ae66-42148ff8722d"). InnerVolumeSpecName "kube-api-access-29grm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.150722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities" (OuterVolumeSpecName: "utilities") pod "e1aa2ef9-2c42-46c6-ae66-42148ff8722d" (UID: "e1aa2ef9-2c42-46c6-ae66-42148ff8722d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.235643 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") on node \"crc\" DevicePath \"\"" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.236467 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.245147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1aa2ef9-2c42-46c6-ae66-42148ff8722d" (UID: "e1aa2ef9-2c42-46c6-ae66-42148ff8722d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.339927 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480691 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" exitCode=0 Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc"} Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"22e92d8c401319bdf05053675c6ee0f39db482299f3df225cbcda4d4a2d66309"} Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480935 4727 scope.go:117] "RemoveContainer" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.482206 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.514293 4727 scope.go:117] "RemoveContainer" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.526985 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.537761 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.559023 4727 scope.go:117] "RemoveContainer" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.617605 4727 scope.go:117] "RemoveContainer" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" Jan 09 11:40:56 crc kubenswrapper[4727]: E0109 11:40:56.622436 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc\": container with ID starting with 66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc not found: ID does not exist" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.622570 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc"} err="failed to get container status \"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc\": rpc error: code = NotFound desc = could not find container \"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc\": container with ID starting with 66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc not found: 
ID does not exist" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.622874 4727 scope.go:117] "RemoveContainer" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" Jan 09 11:40:56 crc kubenswrapper[4727]: E0109 11:40:56.624295 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b\": container with ID starting with e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b not found: ID does not exist" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.624383 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b"} err="failed to get container status \"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b\": rpc error: code = NotFound desc = could not find container \"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b\": container with ID starting with e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b not found: ID does not exist" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.624455 4727 scope.go:117] "RemoveContainer" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" Jan 09 11:40:56 crc kubenswrapper[4727]: E0109 11:40:56.625496 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753\": container with ID starting with fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753 not found: ID does not exist" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.625560 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753"} err="failed to get container status \"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753\": rpc error: code = NotFound desc = could not find container \"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753\": container with ID starting with fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753 not found: ID does not exist" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.872647 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" path="/var/lib/kubelet/pods/e1aa2ef9-2c42-46c6-ae66-42148ff8722d/volumes" Jan 09 11:43:09 crc kubenswrapper[4727]: I0109 11:43:09.404886 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:43:09 crc kubenswrapper[4727]: I0109 11:43:09.405675 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.545745 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:33 crc kubenswrapper[4727]: E0109 11:43:33.547218 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-content" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547240 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-content" Jan 09 11:43:33 crc kubenswrapper[4727]: E0109 11:43:33.547284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-utilities" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547293 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-utilities" Jan 09 11:43:33 crc kubenswrapper[4727]: E0109 11:43:33.547324 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547334 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547681 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.550893 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.569878 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.662249 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.662378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.663243 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.765952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766346 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766397 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766658 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.804663 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.880738 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:34 crc kubenswrapper[4727]: I0109 11:43:34.254061 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:35 crc kubenswrapper[4727]: I0109 11:43:35.108826 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9eff72c-b10f-4813-a088-89b8f592276a" containerID="a8b9b837f3d64cab9ad49691366d5443456d32949ff182ebe10f074f06271689" exitCode=0 Jan 09 11:43:35 crc kubenswrapper[4727]: I0109 11:43:35.108940 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"a8b9b837f3d64cab9ad49691366d5443456d32949ff182ebe10f074f06271689"} Jan 09 11:43:35 crc kubenswrapper[4727]: I0109 11:43:35.109378 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerStarted","Data":"cfdc48fbb4ae6e8db8706e9770d3c85dd6529e3282a3c8e93a31f98df5aecc17"} Jan 09 11:43:37 crc kubenswrapper[4727]: I0109 11:43:37.134146 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9eff72c-b10f-4813-a088-89b8f592276a" containerID="057674623f5b7168f918bfb80a474162495f7bf1f3362667d12edc503c8bd12b" exitCode=0 Jan 09 11:43:37 crc kubenswrapper[4727]: I0109 11:43:37.134234 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"057674623f5b7168f918bfb80a474162495f7bf1f3362667d12edc503c8bd12b"} Jan 09 11:43:38 crc kubenswrapper[4727]: I0109 11:43:38.151724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" 
event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerStarted","Data":"2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4"} Jan 09 11:43:38 crc kubenswrapper[4727]: I0109 11:43:38.177041 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7t5l" podStartSLOduration=2.636370005 podStartE2EDuration="5.17702111s" podCreationTimestamp="2026-01-09 11:43:33 +0000 UTC" firstStartedPulling="2026-01-09 11:43:35.112302885 +0000 UTC m=+3460.562207666" lastFinishedPulling="2026-01-09 11:43:37.65295399 +0000 UTC m=+3463.102858771" observedRunningTime="2026-01-09 11:43:38.173085754 +0000 UTC m=+3463.622990535" watchObservedRunningTime="2026-01-09 11:43:38.17702111 +0000 UTC m=+3463.626925891" Jan 09 11:43:39 crc kubenswrapper[4727]: I0109 11:43:39.405123 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:43:39 crc kubenswrapper[4727]: I0109 11:43:39.405690 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:43:43 crc kubenswrapper[4727]: I0109 11:43:43.881059 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:43 crc kubenswrapper[4727]: I0109 11:43:43.881932 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:43 crc kubenswrapper[4727]: I0109 11:43:43.951834 4727 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:44 crc kubenswrapper[4727]: I0109 11:43:44.266813 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:44 crc kubenswrapper[4727]: I0109 11:43:44.336467 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:46 crc kubenswrapper[4727]: I0109 11:43:46.236204 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7t5l" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" containerID="cri-o://2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4" gracePeriod=2 Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250432 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9eff72c-b10f-4813-a088-89b8f592276a" containerID="2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4" exitCode=0 Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4"} Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250829 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"cfdc48fbb4ae6e8db8706e9770d3c85dd6529e3282a3c8e93a31f98df5aecc17"} Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250857 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfdc48fbb4ae6e8db8706e9770d3c85dd6529e3282a3c8e93a31f98df5aecc17" Jan 09 11:43:47 crc kubenswrapper[4727]: 
I0109 11:43:47.347112 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.485015 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"e9eff72c-b10f-4813-a088-89b8f592276a\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.485277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"e9eff72c-b10f-4813-a088-89b8f592276a\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.485388 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"e9eff72c-b10f-4813-a088-89b8f592276a\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.486907 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities" (OuterVolumeSpecName: "utilities") pod "e9eff72c-b10f-4813-a088-89b8f592276a" (UID: "e9eff72c-b10f-4813-a088-89b8f592276a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.505835 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn" (OuterVolumeSpecName: "kube-api-access-bvkvn") pod "e9eff72c-b10f-4813-a088-89b8f592276a" (UID: "e9eff72c-b10f-4813-a088-89b8f592276a"). InnerVolumeSpecName "kube-api-access-bvkvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.546875 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9eff72c-b10f-4813-a088-89b8f592276a" (UID: "e9eff72c-b10f-4813-a088-89b8f592276a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.588207 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.588275 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") on node \"crc\" DevicePath \"\"" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.588291 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.260182 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.301170 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.312974 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.873735 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" path="/var/lib/kubelet/pods/e9eff72c-b10f-4813-a088-89b8f592276a/volumes" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.404990 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.405817 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.405908 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.407260 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.407333 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e" gracePeriod=600 Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493121 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e" exitCode=0 Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e"} Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"} Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493747 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.548184 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:17 crc kubenswrapper[4727]: E0109 11:44:17.550141 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 
11:44:17.550161 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" Jan 09 11:44:17 crc kubenswrapper[4727]: E0109 11:44:17.550185 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-content" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.550213 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-content" Jan 09 11:44:17 crc kubenswrapper[4727]: E0109 11:44:17.550246 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-utilities" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.550255 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-utilities" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.552033 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.554378 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.558061 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.713858 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.714387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.714614 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.817558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.817669 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.817757 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.818601 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.818730 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.841543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.924759 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:18 crc kubenswrapper[4727]: I0109 11:44:18.459904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:18 crc kubenswrapper[4727]: I0109 11:44:18.572034 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerStarted","Data":"09a9b5a0dc6b9424c9cfba3511d9e49daf69aca584d57a4611284068297cad26"} Jan 09 11:44:19 crc kubenswrapper[4727]: I0109 11:44:19.584008 4727 generic.go:334] "Generic (PLEG): container finished" podID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5" exitCode=0 Jan 09 11:44:19 crc kubenswrapper[4727]: I0109 11:44:19.584057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"} Jan 09 11:44:21 crc kubenswrapper[4727]: I0109 11:44:21.605601 4727 generic.go:334] "Generic (PLEG): container finished" podID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5" exitCode=0 Jan 09 11:44:21 crc kubenswrapper[4727]: I0109 11:44:21.605686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"} Jan 09 11:44:22 crc kubenswrapper[4727]: I0109 11:44:22.617718 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" 
event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerStarted","Data":"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"} Jan 09 11:44:22 crc kubenswrapper[4727]: I0109 11:44:22.648092 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qdnfg" podStartSLOduration=2.960647557 podStartE2EDuration="5.648064832s" podCreationTimestamp="2026-01-09 11:44:17 +0000 UTC" firstStartedPulling="2026-01-09 11:44:19.586461532 +0000 UTC m=+3505.036366313" lastFinishedPulling="2026-01-09 11:44:22.273878807 +0000 UTC m=+3507.723783588" observedRunningTime="2026-01-09 11:44:22.640361193 +0000 UTC m=+3508.090265994" watchObservedRunningTime="2026-01-09 11:44:22.648064832 +0000 UTC m=+3508.097969613" Jan 09 11:44:27 crc kubenswrapper[4727]: I0109 11:44:27.925692 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:27 crc kubenswrapper[4727]: I0109 11:44:27.926653 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:27 crc kubenswrapper[4727]: I0109 11:44:27.972670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:28 crc kubenswrapper[4727]: I0109 11:44:28.716324 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:28 crc kubenswrapper[4727]: I0109 11:44:28.777606 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:30 crc kubenswrapper[4727]: I0109 11:44:30.690905 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qdnfg" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server" 
containerID="cri-o://823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" gracePeriod=2 Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.160812 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.232187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.232681 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.232811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.233452 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities" (OuterVolumeSpecName: "utilities") pod "f79b54d3-f079-42de-b8bf-baab3dc5e17d" (UID: "f79b54d3-f079-42de-b8bf-baab3dc5e17d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.239838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h" (OuterVolumeSpecName: "kube-api-access-nvf4h") pod "f79b54d3-f079-42de-b8bf-baab3dc5e17d" (UID: "f79b54d3-f079-42de-b8bf-baab3dc5e17d"). InnerVolumeSpecName "kube-api-access-nvf4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.335481 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") on node \"crc\" DevicePath \"\"" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.335547 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704275 4727 generic.go:334] "Generic (PLEG): container finished" podID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" exitCode=0 Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"} Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"09a9b5a0dc6b9424c9cfba3511d9e49daf69aca584d57a4611284068297cad26"} Jan 09 11:44:31 crc kubenswrapper[4727]: 
I0109 11:44:31.704415 4727 scope.go:117] "RemoveContainer" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704595 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.721071 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79b54d3-f079-42de-b8bf-baab3dc5e17d" (UID: "f79b54d3-f079-42de-b8bf-baab3dc5e17d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.729066 4727 scope.go:117] "RemoveContainer" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.745219 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.755974 4727 scope.go:117] "RemoveContainer" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.815263 4727 scope.go:117] "RemoveContainer" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" Jan 09 11:44:31 crc kubenswrapper[4727]: E0109 11:44:31.815760 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc\": container with ID starting with 823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc not found: ID does not exist" 
containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.815794 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"} err="failed to get container status \"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc\": rpc error: code = NotFound desc = could not find container \"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc\": container with ID starting with 823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc not found: ID does not exist" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.815822 4727 scope.go:117] "RemoveContainer" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5" Jan 09 11:44:31 crc kubenswrapper[4727]: E0109 11:44:31.816169 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5\": container with ID starting with 36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5 not found: ID does not exist" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.816243 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"} err="failed to get container status \"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5\": rpc error: code = NotFound desc = could not find container \"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5\": container with ID starting with 36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5 not found: ID does not exist" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.816281 4727 scope.go:117] 
"RemoveContainer" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5" Jan 09 11:44:31 crc kubenswrapper[4727]: E0109 11:44:31.816788 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5\": container with ID starting with 42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5 not found: ID does not exist" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5" Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.816820 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"} err="failed to get container status \"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5\": rpc error: code = NotFound desc = could not find container \"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5\": container with ID starting with 42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5 not found: ID does not exist" Jan 09 11:44:32 crc kubenswrapper[4727]: I0109 11:44:32.045268 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:32 crc kubenswrapper[4727]: I0109 11:44:32.056466 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:32 crc kubenswrapper[4727]: I0109 11:44:32.872351 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" path="/var/lib/kubelet/pods/f79b54d3-f079-42de-b8bf-baab3dc5e17d/volumes" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.147974 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"] Jan 09 11:45:00 crc kubenswrapper[4727]: E0109 
11:45:00.149357 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149375 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server" Jan 09 11:45:00 crc kubenswrapper[4727]: E0109 11:45:00.149400 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-utilities" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149407 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-utilities" Jan 09 11:45:00 crc kubenswrapper[4727]: E0109 11:45:00.149424 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-content" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149430 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-content" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149691 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.150549 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.211973 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.212196 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.221579 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"] Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.315060 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.315252 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.315286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.418054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.419010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.419053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.420845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.431236 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.438840 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.536568 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.995821 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"] Jan 09 11:45:01 crc kubenswrapper[4727]: I0109 11:45:01.021181 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" event={"ID":"7d8f743b-1add-4fe9-982e-0bfc6907c483","Type":"ContainerStarted","Data":"c17e6b05f48cf2e91fb574e4a942e37df0959e0df263be063e5725424be73aa7"} Jan 09 11:45:02 crc kubenswrapper[4727]: I0109 11:45:02.034461 4727 generic.go:334] "Generic (PLEG): container finished" podID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerID="4ebec5e3c30b190966b50316c5cc72ed18855f1c4ae9afca377ae1063871ec25" exitCode=0 Jan 09 11:45:02 crc kubenswrapper[4727]: I0109 11:45:02.034562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" 
event={"ID":"7d8f743b-1add-4fe9-982e-0bfc6907c483","Type":"ContainerDied","Data":"4ebec5e3c30b190966b50316c5cc72ed18855f1c4ae9afca377ae1063871ec25"} Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.474027 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.592234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"7d8f743b-1add-4fe9-982e-0bfc6907c483\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.592371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"7d8f743b-1add-4fe9-982e-0bfc6907c483\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.592600 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"7d8f743b-1add-4fe9-982e-0bfc6907c483\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.593613 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d8f743b-1add-4fe9-982e-0bfc6907c483" (UID: "7d8f743b-1add-4fe9-982e-0bfc6907c483"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.599647 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d8f743b-1add-4fe9-982e-0bfc6907c483" (UID: "7d8f743b-1add-4fe9-982e-0bfc6907c483"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.599746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt" (OuterVolumeSpecName: "kube-api-access-fwsnt") pod "7d8f743b-1add-4fe9-982e-0bfc6907c483" (UID: "7d8f743b-1add-4fe9-982e-0bfc6907c483"). InnerVolumeSpecName "kube-api-access-fwsnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.712472 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.712553 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") on node \"crc\" DevicePath \"\"" Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.712605 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.062326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" 
event={"ID":"7d8f743b-1add-4fe9-982e-0bfc6907c483","Type":"ContainerDied","Data":"c17e6b05f48cf2e91fb574e4a942e37df0959e0df263be063e5725424be73aa7"} Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.062388 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c17e6b05f48cf2e91fb574e4a942e37df0959e0df263be063e5725424be73aa7" Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.062426 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.583015 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.594189 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.872989 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" path="/var/lib/kubelet/pods/f4efe522-b8d6-44a6-a75b-7cb19f528323/volumes" Jan 09 11:45:40 crc kubenswrapper[4727]: I0109 11:45:40.515778 4727 scope.go:117] "RemoveContainer" containerID="b65ad815096d70648fb353956b9ad150a228f000450b80449e7948a4c212e007" Jan 09 11:46:08 crc kubenswrapper[4727]: I0109 11:46:08.738826 4727 generic.go:334] "Generic (PLEG): container finished" podID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerID="6fd71c43d4d8330f713c6bebee4de8234126f4e73026f0f31d0a1aa516bc5ecc" exitCode=0 Jan 09 11:46:08 crc kubenswrapper[4727]: I0109 11:46:08.739004 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerDied","Data":"6fd71c43d4d8330f713c6bebee4de8234126f4e73026f0f31d0a1aa516bc5ecc"} Jan 09 11:46:09 crc 
kubenswrapper[4727]: I0109 11:46:09.405906 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:46:09 crc kubenswrapper[4727]: I0109 11:46:09.406007 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.157065 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164323 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164580 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc 
kubenswrapper[4727]: I0109 11:46:10.164753 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164810 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.165008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.165042 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.165183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.169915 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.170354 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data" (OuterVolumeSpecName: "config-data") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.173067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.174126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.178719 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz" (OuterVolumeSpecName: "kube-api-access-dqnbz") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "kube-api-access-dqnbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.206438 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.206942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.226727 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.242899 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.266762 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267036 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267107 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267208 4727 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267270 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267344 4727 reconciler_common.go:293] "Volume detached for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267416 4727 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267535 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.271419 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.294696 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.374321 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.762760 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerDied","Data":"8349c448d8e6552d0e3152e0251e4b01ee6c1b1475591f37b47c5feb06d40267"} Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.762814 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8349c448d8e6552d0e3152e0251e4b01ee6c1b1475591f37b47c5feb06d40267" 
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.762884 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.634703 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 09 11:46:17 crc kubenswrapper[4727]: E0109 11:46:17.637446 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerName="collect-profiles" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.637569 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerName="collect-profiles" Jan 09 11:46:17 crc kubenswrapper[4727]: E0109 11:46:17.637665 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerName="tempest-tests-tempest-tests-runner" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.637770 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerName="tempest-tests-tempest-tests-runner" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.638156 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerName="tempest-tests-tempest-tests-runner" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.638274 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerName="collect-profiles" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.639373 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.642730 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ghr4t" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.645918 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.755829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.756064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjgs\" (UniqueName: \"kubernetes.io/projected/65b47f8e-eab5-4015-9926-36dcf8a8a1f0-kube-api-access-vtjgs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.857656 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.857772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjgs\" (UniqueName: 
\"kubernetes.io/projected/65b47f8e-eab5-4015-9926-36dcf8a8a1f0-kube-api-access-vtjgs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.859078 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.879166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjgs\" (UniqueName: \"kubernetes.io/projected/65b47f8e-eab5-4015-9926-36dcf8a8a1f0-kube-api-access-vtjgs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.885869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.975529 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:18 crc kubenswrapper[4727]: I0109 11:46:18.466996 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 09 11:46:18 crc kubenswrapper[4727]: I0109 11:46:18.469466 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:46:18 crc kubenswrapper[4727]: I0109 11:46:18.840276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"65b47f8e-eab5-4015-9926-36dcf8a8a1f0","Type":"ContainerStarted","Data":"98bc671b77252cccbb3e0727f05231734034df5290b99678df9ec8fcc0b01513"} Jan 09 11:46:19 crc kubenswrapper[4727]: I0109 11:46:19.850368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"65b47f8e-eab5-4015-9926-36dcf8a8a1f0","Type":"ContainerStarted","Data":"7a436021427d9eeed6efd181ef88b40f28ff82051106766fa35113772a806afe"} Jan 09 11:46:19 crc kubenswrapper[4727]: I0109 11:46:19.868023 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.772279412 podStartE2EDuration="2.867999893s" podCreationTimestamp="2026-01-09 11:46:17 +0000 UTC" firstStartedPulling="2026-01-09 11:46:18.469239777 +0000 UTC m=+3623.919144558" lastFinishedPulling="2026-01-09 11:46:19.564960258 +0000 UTC m=+3625.014865039" observedRunningTime="2026-01-09 11:46:19.861925228 +0000 UTC m=+3625.311830019" watchObservedRunningTime="2026-01-09 11:46:19.867999893 +0000 UTC m=+3625.317904694" Jan 09 11:46:39 crc kubenswrapper[4727]: I0109 11:46:39.405354 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:46:39 crc kubenswrapper[4727]: I0109 11:46:39.406329 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.685241 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.689313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.693001 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dwztv"/"openshift-service-ca.crt" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.693168 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dwztv"/"default-dockercfg-bl4mm" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.693468 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dwztv"/"kube-root-ca.crt" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.725605 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.745574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " 
pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.745659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.848341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.848463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.848951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.869873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " 
pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:43 crc kubenswrapper[4727]: I0109 11:46:43.016577 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:43 crc kubenswrapper[4727]: I0109 11:46:43.519918 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:46:44 crc kubenswrapper[4727]: I0109 11:46:44.127609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerStarted","Data":"71abb5ece9245a9429a415aa2a433943d1c05337ce26e0114c0f545b54ef0723"} Jan 09 11:46:51 crc kubenswrapper[4727]: I0109 11:46:51.208828 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerStarted","Data":"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70"} Jan 09 11:46:52 crc kubenswrapper[4727]: I0109 11:46:52.219733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerStarted","Data":"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676"} Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.002725 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dwztv/must-gather-hnbtv" podStartSLOduration=5.752159253 podStartE2EDuration="13.002697359s" podCreationTimestamp="2026-01-09 11:46:42 +0000 UTC" firstStartedPulling="2026-01-09 11:46:43.529848298 +0000 UTC m=+3648.979753079" lastFinishedPulling="2026-01-09 11:46:50.780386364 +0000 UTC m=+3656.230291185" observedRunningTime="2026-01-09 11:46:52.243856108 +0000 UTC m=+3657.693760889" watchObservedRunningTime="2026-01-09 11:46:55.002697359 +0000 UTC 
m=+3660.452602150" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.013278 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v6lnm"] Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.015197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.066694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.066785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.168879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.168975 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.169392 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.212266 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.348140 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: W0109 11:46:55.394453 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc971b42_d720_4b3e_8686_02a203f4c925.slice/crio-a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55 WatchSource:0}: Error finding container a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55: Status 404 returned error can't find the container with id a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55 Jan 09 11:46:56 crc kubenswrapper[4727]: I0109 11:46:56.255360 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" event={"ID":"bc971b42-d720-4b3e-8686-02a203f4c925","Type":"ContainerStarted","Data":"a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55"} Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.394912 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" 
event={"ID":"bc971b42-d720-4b3e-8686-02a203f4c925","Type":"ContainerStarted","Data":"a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c"} Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.405566 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.405641 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.405701 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.406731 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.406799 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" gracePeriod=600 Jan 09 11:47:10 crc kubenswrapper[4727]: E0109 11:47:10.106960 
4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.407621 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" exitCode=0 Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.409140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"} Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.409189 4727 scope.go:117] "RemoveContainer" containerID="1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e" Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.409633 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:10 crc kubenswrapper[4727]: E0109 11:47:10.409923 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.433389 4727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" podStartSLOduration=3.042423175 podStartE2EDuration="16.433368165s" podCreationTimestamp="2026-01-09 11:46:54 +0000 UTC" firstStartedPulling="2026-01-09 11:46:55.397386386 +0000 UTC m=+3660.847291167" lastFinishedPulling="2026-01-09 11:47:08.788331376 +0000 UTC m=+3674.238236157" observedRunningTime="2026-01-09 11:47:10.426788387 +0000 UTC m=+3675.876693168" watchObservedRunningTime="2026-01-09 11:47:10.433368165 +0000 UTC m=+3675.883272946" Jan 09 11:47:23 crc kubenswrapper[4727]: I0109 11:47:23.861101 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:23 crc kubenswrapper[4727]: E0109 11:47:23.862301 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:38 crc kubenswrapper[4727]: I0109 11:47:38.861889 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:38 crc kubenswrapper[4727]: E0109 11:47:38.862723 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:49 crc kubenswrapper[4727]: I0109 11:47:49.870115 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:49 crc kubenswrapper[4727]: E0109 11:47:49.871446 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:00 crc kubenswrapper[4727]: I0109 11:48:00.861292 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:00 crc kubenswrapper[4727]: E0109 11:48:00.862307 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:05 crc kubenswrapper[4727]: I0109 11:48:05.022728 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc971b42-d720-4b3e-8686-02a203f4c925" containerID="a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c" exitCode=0 Jan 09 11:48:05 crc kubenswrapper[4727]: I0109 11:48:05.023183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" event={"ID":"bc971b42-d720-4b3e-8686-02a203f4c925","Type":"ContainerDied","Data":"a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c"} Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.138743 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.184861 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v6lnm"] Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.197229 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v6lnm"] Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.326846 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"bc971b42-d720-4b3e-8686-02a203f4c925\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.327179 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"bc971b42-d720-4b3e-8686-02a203f4c925\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.327891 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host" (OuterVolumeSpecName: "host") pod "bc971b42-d720-4b3e-8686-02a203f4c925" (UID: "bc971b42-d720-4b3e-8686-02a203f4c925"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.356250 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62" (OuterVolumeSpecName: "kube-api-access-pnq62") pod "bc971b42-d720-4b3e-8686-02a203f4c925" (UID: "bc971b42-d720-4b3e-8686-02a203f4c925"). InnerVolumeSpecName "kube-api-access-pnq62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.430760 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.430814 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.876816 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" path="/var/lib/kubelet/pods/bc971b42-d720-4b3e-8686-02a203f4c925/volumes" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.051317 4727 scope.go:117] "RemoveContainer" containerID="a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.051378 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.372955 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/crc-debug-8nxkz"] Jan 09 11:48:07 crc kubenswrapper[4727]: E0109 11:48:07.374698 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" containerName="container-00" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.374798 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" containerName="container-00" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.375262 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" containerName="container-00" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.376253 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.555365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.555612 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.657905 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b966b\" (UniqueName: 
\"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.658029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.658167 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.690126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.695860 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: W0109 11:48:07.731923 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68fefaa2_2054_4a47_afbd_3ef34e97798b.slice/crio-5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f WatchSource:0}: Error finding container 5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f: Status 404 returned error can't find the container with id 5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f Jan 09 11:48:08 crc kubenswrapper[4727]: I0109 11:48:08.065938 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" event={"ID":"68fefaa2-2054-4a47-afbd-3ef34e97798b","Type":"ContainerStarted","Data":"5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f"} Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.077470 4727 generic.go:334] "Generic (PLEG): container finished" podID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerID="5d1b721da3806073c99a9aefddf0506e578acb1644038ff177b539b16fe78408" exitCode=0 Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.077556 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" event={"ID":"68fefaa2-2054-4a47-afbd-3ef34e97798b","Type":"ContainerDied","Data":"5d1b721da3806073c99a9aefddf0506e578acb1644038ff177b539b16fe78408"} Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.654337 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-8nxkz"] Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.666519 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-8nxkz"] Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.230924 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.422343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"68fefaa2-2054-4a47-afbd-3ef34e97798b\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.422894 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"68fefaa2-2054-4a47-afbd-3ef34e97798b\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.422661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host" (OuterVolumeSpecName: "host") pod "68fefaa2-2054-4a47-afbd-3ef34e97798b" (UID: "68fefaa2-2054-4a47-afbd-3ef34e97798b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.423783 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.429885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b" (OuterVolumeSpecName: "kube-api-access-b966b") pod "68fefaa2-2054-4a47-afbd-3ef34e97798b" (UID: "68fefaa2-2054-4a47-afbd-3ef34e97798b"). InnerVolumeSpecName "kube-api-access-b966b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.526018 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.846252 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v8xfq"] Jan 09 11:48:10 crc kubenswrapper[4727]: E0109 11:48:10.846866 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerName="container-00" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.846896 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerName="container-00" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.847124 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerName="container-00" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.848143 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.878434 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" path="/var/lib/kubelet/pods/68fefaa2-2054-4a47-afbd-3ef34e97798b/volumes" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.038237 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.038324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.102576 4727 scope.go:117] "RemoveContainer" containerID="5d1b721da3806073c99a9aefddf0506e578acb1644038ff177b539b16fe78408" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.102656 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.141612 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.142225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.142018 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.162350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.169408 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: W0109 11:48:11.210277 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod981355da_ce46_4790_9eea_9af34f7cc603.slice/crio-7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623 WatchSource:0}: Error finding container 7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623: Status 404 returned error can't find the container with id 7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623 Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.118687 4727 generic.go:334] "Generic (PLEG): container finished" podID="981355da-ce46-4790-9eea-9af34f7cc603" containerID="9f9bc3e161ad92af21f9f16003aee0e88064ed893d40b8c96b64d85820d1df81" exitCode=0 Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.118749 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" event={"ID":"981355da-ce46-4790-9eea-9af34f7cc603","Type":"ContainerDied","Data":"9f9bc3e161ad92af21f9f16003aee0e88064ed893d40b8c96b64d85820d1df81"} Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.118790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" event={"ID":"981355da-ce46-4790-9eea-9af34f7cc603","Type":"ContainerStarted","Data":"7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623"} Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.163965 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v8xfq"] Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.176402 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v8xfq"] Jan 09 11:48:13 crc kubenswrapper[4727]: I0109 11:48:13.875203 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.059773 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"981355da-ce46-4790-9eea-9af34f7cc603\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.060000 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"981355da-ce46-4790-9eea-9af34f7cc603\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.060125 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host" (OuterVolumeSpecName: "host") pod "981355da-ce46-4790-9eea-9af34f7cc603" (UID: "981355da-ce46-4790-9eea-9af34f7cc603"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.060765 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.069874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6" (OuterVolumeSpecName: "kube-api-access-gdpp6") pod "981355da-ce46-4790-9eea-9af34f7cc603" (UID: "981355da-ce46-4790-9eea-9af34f7cc603"). InnerVolumeSpecName "kube-api-access-gdpp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.163002 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.771374 4727 scope.go:117] "RemoveContainer" containerID="9f9bc3e161ad92af21f9f16003aee0e88064ed893d40b8c96b64d85820d1df81" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.771640 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.867169 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:14 crc kubenswrapper[4727]: E0109 11:48:14.867926 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.887423 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981355da-ce46-4790-9eea-9af34f7cc603" path="/var/lib/kubelet/pods/981355da-ce46-4790-9eea-9af34f7cc603/volumes" Jan 09 11:48:26 crc kubenswrapper[4727]: I0109 11:48:26.862615 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:26 crc kubenswrapper[4727]: E0109 11:48:26.863545 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.622104 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api/0.log" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.853072 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api-log/0.log" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.883994 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener/0.log" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.922182 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener-log/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.114745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.122583 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker-log/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.366757 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-central-agent/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 
11:48:30.406770 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc_23e25abc-b16a-4273-846e-7fab7ef1a095/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.462245 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-notification-agent/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.586385 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/proxy-httpd/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.626486 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/sg-core/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.715979 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.806393 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api-log/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.929465 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/cinder-scheduler/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.995879 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/probe/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.173496 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-x2djn_f1169cca-13ce-4a18-8901-faa73fc5b913/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 
09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.251001 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2l88s_fc6114d6-7052-46b3-a8e5-c8b9731cc92c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.420096 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.620017 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.686816 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/dnsmasq-dns/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.890689 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz_79cfc519-9725-4957-b42c-d262651895a3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.945947 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-log/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.985242 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-httpd/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.435140 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-log/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.438437 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-httpd/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.695059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.850820 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qplw9_a4f9d22c-83b0-4c0c-95e3-a2b2937908db/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.986766 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon-log/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.010472 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qs4rr_e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.351483 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3/kube-state-metrics/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.401954 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-666857844b-c2hp6_3738e7aa-d182-43a0-962c-b735526851f2/keystone-api/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.540059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zs24v_a56270d2-f80b-4dda-a64c-fe39d4b4a9e5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.814089 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-api/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.918899 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-httpd/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.970905 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82_92bbfcf1-befd-42df-a532-97f9a3bd22d0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.496136 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-log/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.574795 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3aab78e7-6f64-4c9e-bb37-f670092f06eb/nova-cell0-conductor-conductor/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.759903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-api/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.826167 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6a601271-3d79-4446-bc6f-81b4490541f4/nova-cell1-conductor-conductor/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.991535 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7275705c-d408-4eb4-af28-b9b51403b913/nova-cell1-novncproxy-novncproxy/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.234657 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s9spc_291b6783-3c71-4449-b696-27c7c340c41a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.355066 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-log/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.713613 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1203f055-468b-48e1-b859-78a4d11d5034/nova-scheduler-scheduler/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.784930 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.930799 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.976273 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/galera/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.157149 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.370657 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.460048 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/galera/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.611351 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_openstackclient_06c8d5e8-c424-4b08-98a2-8e89fa5a27b4/openstackclient/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.687744 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p58fw_ede60be2-7d1e-482a-b994-6c552d322575/openstack-network-exporter/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.792749 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-metadata/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.966569 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwrp2_d81594ff-04f5-47c2-9620-db583609e9aa/ovn-controller/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.056547 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.331322 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.343497 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovs-vswitchd/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.374589 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.584874 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/openstack-network-exporter/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.636482 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rhzcm_5ebde73e-573e-4b52-b779-dd3cd03761e0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.758872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/ovn-northd/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.869807 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/ovsdbserver-nb/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.964158 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/openstack-network-exporter/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.075372 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/openstack-network-exporter/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.115823 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/ovsdbserver-sb/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.815872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-api/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.840262 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-log/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.028413 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.277478 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.299157 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/rabbitmq/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.385200 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.558866 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.765269 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd_72a53995-d5d0-4795-a1c7-f8a570a0ff6a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.771582 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/rabbitmq/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.942568 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4zggm_ce764242-0f23-4580-87ee-9f0f2f81fb0e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.053525 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv_d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.212781 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-27qwg_6f717d58-9e42-4359-89e8-70a60345d546/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.341129 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9n6wb_247ff33e-a764-4e75-9d54-2c45ae8d8ca7/ssh-known-hosts-edpm-deployment/0.log" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.859792 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:40 crc kubenswrapper[4727]: E0109 11:48:40.860193 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.971394 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-server/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.076380 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-t2qwp_5a7df215-53c5-4771-95de-9af59255b3de/swift-ring-rebalance/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.095703 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-httpd/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.264058 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-auditor/0.log" Jan 09 11:48:41 crc 
kubenswrapper[4727]: I0109 11:48:41.340330 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-reaper/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.376273 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-replicator/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.456466 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-server/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.501861 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-auditor/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.660592 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-updater/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.691819 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-server/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.692709 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-replicator/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.741270 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-auditor/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.891269 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-expirer/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.923594 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-replicator/0.log" Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.976324 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-server/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.072009 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-updater/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.124671 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/rsync/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.238897 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/swift-recon-cron/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.440787 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5_2d4033a7-e7a4-495b-bbb9-63e8ae1189bc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.541648 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e/tempest-tests-tempest-tests-runner/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.755372 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_65b47f8e-eab5-4015-9926-36dcf8a8a1f0/test-operator-logs-container/0.log" Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.833189 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m4njz_6811cbf2-94eb-44a0-ae3e-8f0e35163df5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:51 crc kubenswrapper[4727]: I0109 11:48:51.015218 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0e6e8606-58f3-4640-939b-afa25ce1ce03/memcached/0.log" Jan 09 11:48:51 crc kubenswrapper[4727]: I0109 11:48:51.861490 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:51 crc kubenswrapper[4727]: E0109 11:48:51.861800 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:49:05 crc kubenswrapper[4727]: I0109 11:49:05.860663 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:49:05 crc kubenswrapper[4727]: E0109 11:49:05.861797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.651910 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-nd7lx_f57a8b19-1f94-4cc4-af28-f7c506f93de5/manager/0.log" Jan 09 
11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.756122 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-l25ck_63639485-2ddb-4983-921a-9de5dda98f0f/manager/0.log" Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.861341 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-l4fld_e8c91cda-4264-401f-83de-20ddcf5f0d4d/manager/0.log" Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.957455 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.144626 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.170170 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.185371 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.337447 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.337830 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.413043 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/extract/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.579460 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-s49vr_9891b17e-81f9-4999-b489-db3e162c2a54/manager/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.598428 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-w5c7d_9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6/manager/0.log" Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.840800 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-nxc7n_51db22df-3d25-4c12-b104-eb3848940958/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.064989 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-g5ckd_e4480343-1920-4926-8668-e47e5bbfb646/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.083906 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-qpmcd_24886819-7c1f-4b1f-880e-4b2102e302c1/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.297792 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-4nzmw_6040cced-684e-4521-9c4e-1debba9d5320/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.311887 4727 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-6gtz5_ddfee9e4-1084-4750-ab19-473dde7a2fb6/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.575258 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-4dv6h_e604d4a1-bf95-49df-a854-b15337b7fae7/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.583305 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-q8wx7_848b9588-10d2-4bd4-bcc0-cccd55334c85/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.780936 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-pnk72_fab7e320-c116-4603-9aac-2e310be1b209/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.800734 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-69kx5_9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb/manager/0.log" Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.954429 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh_3550e1cd-642e-481c-b98f-b6d3770f51ca/manager/0.log" Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.355000 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cj5kr_26bfbd30-40a2-466a-862d-6cdf25911f85/registry-server/0.log" Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.556823 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-75c59d454f-d829c_f749f148-ae4b-475b-90d9-1028d134d57c/operator/0.log" Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.709040 
4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-gkkm4_558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6/manager/0.log" Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.856187 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-cc8k9_15c1d49b-c086-4c30-9a99-e0fb597dd76f/manager/0.log" Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.174156 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2m6mz_ee5399a2-4352-4013-9c26-a40e4bc815e3/operator/0.log" Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.265871 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-vgcgj_ba0be6cc-1e31-4421-aa33-1e2514069376/manager/0.log" Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.569598 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-m8s9d_e3f94965-fce3-4e35-9f97-5047e05dd50a/manager/0.log" Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.574724 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-x4r9z_c371fa9c-dd02-4673-99aa-4ec8fa8d9e07/manager/0.log" Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.710286 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7db9fd4464-5h9ft_6a33b307-e521-43c4-8e35-3e9d7d553716/manager/0.log" Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.811406 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-jvkn5_9300f2a9-97a8-4868-9485-8dd5d51df39e/manager/0.log" Jan 09 11:49:18 crc kubenswrapper[4727]: I0109 
11:49:18.860470 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:49:18 crc kubenswrapper[4727]: E0109 11:49:18.861631 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.746171 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w6pvx_879d1222-addb-406a-b8fd-3ce4068c1d08/control-plane-machine-set-operator/0.log" Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.836086 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/machine-api-operator/0.log" Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.861256 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:49:33 crc kubenswrapper[4727]: E0109 11:49:33.861664 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.887415 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/kube-rbac-proxy/0.log" Jan 09 11:49:40 crc kubenswrapper[4727]: I0109 11:49:40.701447 4727 scope.go:117] "RemoveContainer" containerID="a8b9b837f3d64cab9ad49691366d5443456d32949ff182ebe10f074f06271689" Jan 09 11:49:40 crc kubenswrapper[4727]: I0109 11:49:40.727197 4727 scope.go:117] "RemoveContainer" containerID="2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4" Jan 09 11:49:40 crc kubenswrapper[4727]: I0109 11:49:40.780888 4727 scope.go:117] "RemoveContainer" containerID="057674623f5b7168f918bfb80a474162495f7bf1f3362667d12edc503c8bd12b" Jan 09 11:49:45 crc kubenswrapper[4727]: I0109 11:49:45.860312 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:49:45 crc kubenswrapper[4727]: E0109 11:49:45.861158 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:49:46 crc kubenswrapper[4727]: I0109 11:49:46.737152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2qqks_2715d39f-d488-448b-b6f2-ff592dea195a/cert-manager-controller/0.log" Jan 09 11:49:46 crc kubenswrapper[4727]: I0109 11:49:46.890955 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cbsgr_3a45eda8-4151-4b6c-b0f2-ab6416dc34e9/cert-manager-cainjector/0.log" Jan 09 11:49:46 crc kubenswrapper[4727]: I0109 11:49:46.981681 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qlfjg_5cee0bf6-27dd-4944-bbef-574afbae1542/cert-manager-webhook/0.log" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.854248 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"] Jan 09 11:49:51 crc kubenswrapper[4727]: E0109 11:49:51.869368 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981355da-ce46-4790-9eea-9af34f7cc603" containerName="container-00" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.869444 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="981355da-ce46-4790-9eea-9af34f7cc603" containerName="container-00" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.871106 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="981355da-ce46-4790-9eea-9af34f7cc603" containerName="container-00" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.879062 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.894429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"] Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.926923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.927036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.927110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.031382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.031481 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.031677 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.032127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.032220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.055273 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.211716 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.734674 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"] Jan 09 11:49:53 crc kubenswrapper[4727]: I0109 11:49:53.753815 4727 generic.go:334] "Generic (PLEG): container finished" podID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1" exitCode=0 Jan 09 11:49:53 crc kubenswrapper[4727]: I0109 11:49:53.753933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1"} Jan 09 11:49:53 crc kubenswrapper[4727]: I0109 11:49:53.754372 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerStarted","Data":"43fa514c238a858568bd3ffb421a7fbd82c4411a3475d73032ca455b3c2d1d6c"} Jan 09 11:49:55 crc kubenswrapper[4727]: I0109 11:49:55.787705 4727 generic.go:334] "Generic (PLEG): container finished" podID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23" exitCode=0 Jan 09 11:49:55 crc kubenswrapper[4727]: I0109 11:49:55.787841 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"} Jan 09 11:49:57 crc kubenswrapper[4727]: I0109 11:49:57.814686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" 
event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerStarted","Data":"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"} Jan 09 11:49:57 crc kubenswrapper[4727]: I0109 11:49:57.843695 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tqmcx" podStartSLOduration=4.331636172 podStartE2EDuration="6.843671684s" podCreationTimestamp="2026-01-09 11:49:51 +0000 UTC" firstStartedPulling="2026-01-09 11:49:53.755961887 +0000 UTC m=+3839.205866668" lastFinishedPulling="2026-01-09 11:49:56.267997399 +0000 UTC m=+3841.717902180" observedRunningTime="2026-01-09 11:49:57.836590341 +0000 UTC m=+3843.286495132" watchObservedRunningTime="2026-01-09 11:49:57.843671684 +0000 UTC m=+3843.293576465" Jan 09 11:50:00 crc kubenswrapper[4727]: I0109 11:50:00.860357 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:00 crc kubenswrapper[4727]: E0109 11:50:00.861148 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.202850 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-6dwzn_9721a7da-2c8a-4a0d-ac56-8b4b11c028cd/nmstate-console-plugin/0.log" Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.541111 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4757d_673fefde-8c1b-46fe-a88a-00b3fa962a3e/nmstate-handler/0.log" Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.588935 
4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/kube-rbac-proxy/0.log" Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.634452 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/nmstate-metrics/0.log" Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.822623 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p86wv_b4c7550e-1eaa-4e85-b44d-c752f6e37955/nmstate-operator/0.log" Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.956027 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-5lc88_7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac/nmstate-webhook/0.log" Jan 09 11:50:02 crc kubenswrapper[4727]: I0109 11:50:02.213423 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:50:02 crc kubenswrapper[4727]: I0109 11:50:02.213995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:50:03 crc kubenswrapper[4727]: I0109 11:50:03.265063 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqmcx" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" probeResult="failure" output=< Jan 09 11:50:03 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 11:50:03 crc kubenswrapper[4727]: > Jan 09 11:50:12 crc kubenswrapper[4727]: I0109 11:50:12.290919 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:50:12 crc kubenswrapper[4727]: I0109 11:50:12.397452 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:50:12 crc kubenswrapper[4727]: I0109 11:50:12.636648 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"] Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.008785 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tqmcx" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" containerID="cri-o://8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" gracePeriod=2 Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.530870 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.590602 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.590776 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.591953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities" (OuterVolumeSpecName: "utilities") pod "4dcc81d9-6a03-4adf-a867-77fbb1589a0e" (UID: "4dcc81d9-6a03-4adf-a867-77fbb1589a0e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.592095 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.594315 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.600837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f" (OuterVolumeSpecName: "kube-api-access-jvk9f") pod "4dcc81d9-6a03-4adf-a867-77fbb1589a0e" (UID: "4dcc81d9-6a03-4adf-a867-77fbb1589a0e"). InnerVolumeSpecName "kube-api-access-jvk9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.696501 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") on node \"crc\" DevicePath \"\"" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.711357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dcc81d9-6a03-4adf-a867-77fbb1589a0e" (UID: "4dcc81d9-6a03-4adf-a867-77fbb1589a0e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.797829 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.867610 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:14 crc kubenswrapper[4727]: E0109 11:50:14.867959 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022720 4727 generic.go:334] "Generic (PLEG): container finished" podID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" exitCode=0 Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"} Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"43fa514c238a858568bd3ffb421a7fbd82c4411a3475d73032ca455b3c2d1d6c"} Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022856 4727 scope.go:117] "RemoveContainer" 
containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022863 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.046640 4727 scope.go:117] "RemoveContainer" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.059036 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"] Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.070768 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"] Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.081196 4727 scope.go:117] "RemoveContainer" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.123011 4727 scope.go:117] "RemoveContainer" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" Jan 09 11:50:15 crc kubenswrapper[4727]: E0109 11:50:15.123854 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15\": container with ID starting with 8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15 not found: ID does not exist" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.123899 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"} err="failed to get container status \"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15\": rpc error: code = NotFound desc = could not 
find container \"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15\": container with ID starting with 8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15 not found: ID does not exist" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.123931 4727 scope.go:117] "RemoveContainer" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23" Jan 09 11:50:15 crc kubenswrapper[4727]: E0109 11:50:15.124450 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23\": container with ID starting with 525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23 not found: ID does not exist" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.124558 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"} err="failed to get container status \"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23\": rpc error: code = NotFound desc = could not find container \"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23\": container with ID starting with 525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23 not found: ID does not exist" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.124639 4727 scope.go:117] "RemoveContainer" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1" Jan 09 11:50:15 crc kubenswrapper[4727]: E0109 11:50:15.124986 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1\": container with ID starting with 1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1 not found: ID 
does not exist" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.125019 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1"} err="failed to get container status \"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1\": rpc error: code = NotFound desc = could not find container \"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1\": container with ID starting with 1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1 not found: ID does not exist" Jan 09 11:50:16 crc kubenswrapper[4727]: I0109 11:50:16.874065 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" path="/var/lib/kubelet/pods/4dcc81d9-6a03-4adf-a867-77fbb1589a0e/volumes" Jan 09 11:50:27 crc kubenswrapper[4727]: I0109 11:50:27.861069 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:27 crc kubenswrapper[4727]: E0109 11:50:27.862234 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.386447 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/kube-rbac-proxy/0.log" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.588634 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/controller/0.log" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.679982 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-6msbv_ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee/frr-k8s-webhook-server/0.log" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.824967 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.011489 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.021235 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.061051 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.082895 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.272634 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.312363 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.316738 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.351369 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.543417 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.550483 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.563643 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/controller/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.581570 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.783078 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.819155 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.874347 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy-frr/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.036234 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/reloader/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.109193 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fc8994bc9-qg228_d7eb33c1-26fc-47be-8c5b-f235afa77ea8/manager/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.396597 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c5db45976-lnrnz_d3f738e6-a0bc-42cd-b4d8-71940837e09f/webhook-server/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.590695 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/kube-rbac-proxy/0.log" Jan 09 11:50:34 crc kubenswrapper[4727]: I0109 11:50:34.226618 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/speaker/0.log" Jan 09 11:50:34 crc kubenswrapper[4727]: I0109 11:50:34.705255 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr/0.log" Jan 09 11:50:39 crc kubenswrapper[4727]: I0109 11:50:39.861085 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:39 crc kubenswrapper[4727]: E0109 11:50:39.862303 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.048053 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.436912 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.437045 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.457785 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.645926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.685192 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.752871 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/extract/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.877745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" 
Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.092211 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.107431 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.170023 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.410220 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.467436 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/extract/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.727048 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.870692 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.113020 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.118646 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.161438 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.303070 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.383103 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.556878 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.749948 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/registry-server/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.858771 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.924300 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.939962 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.132872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.140373 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.369898 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-55prz_82b1f92b-6077-4b4c-876a-3d732a78b2cc/marketplace-operator/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.504318 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.678900 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/registry-server/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.803572 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.810651 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.829379 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.027443 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.054502 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.242564 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/registry-server/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.257396 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.486089 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.492223 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.493488 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" 
Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.687434 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.713547 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 11:50:53 crc kubenswrapper[4727]: I0109 11:50:53.218816 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/registry-server/0.log" Jan 09 11:50:53 crc kubenswrapper[4727]: I0109 11:50:53.860075 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:53 crc kubenswrapper[4727]: E0109 11:50:53.860519 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:06 crc kubenswrapper[4727]: I0109 11:51:06.861005 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:06 crc kubenswrapper[4727]: E0109 11:51:06.862565 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:19 crc kubenswrapper[4727]: I0109 11:51:19.861200 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:19 crc kubenswrapper[4727]: E0109 11:51:19.862293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.678420 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.679731 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-utilities" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679746 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-utilities" Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.679755 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-content" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679763 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-content" Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.679776 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679781 4727 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679982 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.681362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.700413 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.844523 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.844999 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.845041 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.860832 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.861118 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947116 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947783 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.970809 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:34 crc kubenswrapper[4727]: I0109 11:51:34.008373 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:34 crc kubenswrapper[4727]: I0109 11:51:34.559048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:34 crc kubenswrapper[4727]: I0109 11:51:34.872568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerStarted","Data":"a01774070452956398e76f97df5be70efeff207ae0f8826425953d52ef7f9fb0"} Jan 09 11:51:35 crc kubenswrapper[4727]: E0109 11:51:35.134593 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2cd93f_bd92_49a9_9845_209da98d1ef1.slice/crio-cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:51:35 crc kubenswrapper[4727]: I0109 11:51:35.885122 4727 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerID="cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95" exitCode=0 Jan 09 11:51:35 crc kubenswrapper[4727]: I0109 11:51:35.885247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95"} Jan 09 11:51:35 crc kubenswrapper[4727]: I0109 11:51:35.887967 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:51:38 crc kubenswrapper[4727]: I0109 11:51:38.918703 4727 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerID="a48ad10f14588525be52ba99a6f228bbf392891b7101a1611d23da068cec5c09" exitCode=0 Jan 09 11:51:38 
crc kubenswrapper[4727]: I0109 11:51:38.919295 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"a48ad10f14588525be52ba99a6f228bbf392891b7101a1611d23da068cec5c09"} Jan 09 11:51:40 crc kubenswrapper[4727]: I0109 11:51:40.957652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerStarted","Data":"7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028"} Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.009522 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.011632 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.069025 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.093839 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgzzb" podStartSLOduration=7.352611904 podStartE2EDuration="11.093815713s" podCreationTimestamp="2026-01-09 11:51:33 +0000 UTC" firstStartedPulling="2026-01-09 11:51:35.887771518 +0000 UTC m=+3941.337676299" lastFinishedPulling="2026-01-09 11:51:39.628975327 +0000 UTC m=+3945.078880108" observedRunningTime="2026-01-09 11:51:40.986637176 +0000 UTC m=+3946.436541967" watchObservedRunningTime="2026-01-09 11:51:44.093815713 +0000 UTC m=+3949.543720494" Jan 09 11:51:48 crc kubenswrapper[4727]: I0109 11:51:48.865805 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:48 crc kubenswrapper[4727]: E0109 11:51:48.866930 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:54 crc kubenswrapper[4727]: I0109 11:51:54.070810 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:54 crc kubenswrapper[4727]: I0109 11:51:54.131667 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:54 crc kubenswrapper[4727]: I0109 11:51:54.132574 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgzzb" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" containerID="cri-o://7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028" gracePeriod=2 Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.098846 4727 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerID="7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028" exitCode=0 Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.099353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028"} Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.291295 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.452336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.452466 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.452614 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.453340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities" (OuterVolumeSpecName: "utilities") pod "cb2cd93f-bd92-49a9-9845-209da98d1ef1" (UID: "cb2cd93f-bd92-49a9-9845-209da98d1ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.460773 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g" (OuterVolumeSpecName: "kube-api-access-wwn9g") pod "cb2cd93f-bd92-49a9-9845-209da98d1ef1" (UID: "cb2cd93f-bd92-49a9-9845-209da98d1ef1"). InnerVolumeSpecName "kube-api-access-wwn9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.504779 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb2cd93f-bd92-49a9-9845-209da98d1ef1" (UID: "cb2cd93f-bd92-49a9-9845-209da98d1ef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.560380 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.560455 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") on node \"crc\" DevicePath \"\"" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.560475 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.112392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"a01774070452956398e76f97df5be70efeff207ae0f8826425953d52ef7f9fb0"} Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.112574 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.112939 4727 scope.go:117] "RemoveContainer" containerID="7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.149907 4727 scope.go:117] "RemoveContainer" containerID="a48ad10f14588525be52ba99a6f228bbf392891b7101a1611d23da068cec5c09" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.155682 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.169842 4727 scope.go:117] "RemoveContainer" containerID="cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.171258 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.873673 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" path="/var/lib/kubelet/pods/cb2cd93f-bd92-49a9-9845-209da98d1ef1/volumes" Jan 09 11:52:01 crc kubenswrapper[4727]: I0109 11:52:01.860957 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:52:01 crc kubenswrapper[4727]: E0109 11:52:01.861607 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:52:16 crc kubenswrapper[4727]: I0109 11:52:16.860843 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:52:17 crc kubenswrapper[4727]: I0109 11:52:17.326097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f"} Jan 09 11:53:02 crc kubenswrapper[4727]: I0109 11:53:02.785299 4727 generic.go:334] "Generic (PLEG): container finished" podID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" exitCode=0 Jan 09 11:53:02 crc kubenswrapper[4727]: I0109 11:53:02.785429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerDied","Data":"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70"} Jan 09 11:53:02 crc kubenswrapper[4727]: I0109 11:53:02.786921 4727 scope.go:117] "RemoveContainer" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:03 crc kubenswrapper[4727]: I0109 11:53:03.462395 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwztv_must-gather-hnbtv_b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/gather/0.log" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.219748 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.222947 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dwztv/must-gather-hnbtv" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" containerID="cri-o://c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" gracePeriod=2 Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.227575 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.654267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwztv_must-gather-hnbtv_b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/copy/0.log" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.655282 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.782782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.782908 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.790027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq" (OuterVolumeSpecName: "kube-api-access-xh4rq") pod "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" (UID: "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5"). InnerVolumeSpecName "kube-api-access-xh4rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.885652 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") on node \"crc\" DevicePath \"\"" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.901788 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwztv_must-gather-hnbtv_b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/copy/0.log" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.902893 4727 generic.go:334] "Generic (PLEG): container finished" podID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" exitCode=143 Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.902989 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.903040 4727 scope.go:117] "RemoveContainer" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.928523 4727 scope.go:117] "RemoveContainer" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.941901 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" (UID: "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.988265 4727 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.003330 4727 scope.go:117] "RemoveContainer" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" Jan 09 11:53:12 crc kubenswrapper[4727]: E0109 11:53:12.004378 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676\": container with ID starting with c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676 not found: ID does not exist" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.004442 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676"} err="failed to get container status \"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676\": rpc error: code = NotFound desc = could not find container \"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676\": container with ID starting with c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676 not found: ID does not exist" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.004518 4727 scope.go:117] "RemoveContainer" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:12 crc kubenswrapper[4727]: E0109 11:53:12.004850 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70\": container with ID starting with 77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70 not found: ID does not exist" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.004885 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70"} err="failed to get container status \"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70\": rpc error: code = NotFound desc = could not find container \"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70\": container with ID starting with 77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70 not found: ID does not exist" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.878531 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" path="/var/lib/kubelet/pods/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/volumes" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.326682 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="gather" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328337 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="gather" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328366 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328375 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328395 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-content" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328402 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-content" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328426 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-utilities" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328434 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-utilities" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328454 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328460 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328708 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328733 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328746 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="gather" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.330599 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.343228 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.520741 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-catalog-content\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.520951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-utilities\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.521025 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6975\" (UniqueName: \"kubernetes.io/projected/26aacbc8-deff-4e22-931d-552244f5bfcc-kube-api-access-t6975\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.623708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-utilities\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.623854 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t6975\" (UniqueName: \"kubernetes.io/projected/26aacbc8-deff-4e22-931d-552244f5bfcc-kube-api-access-t6975\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.624317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-utilities\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.624354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-catalog-content\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.624773 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-catalog-content\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.656398 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6975\" (UniqueName: \"kubernetes.io/projected/26aacbc8-deff-4e22-931d-552244f5bfcc-kube-api-access-t6975\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.662924 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:55 crc kubenswrapper[4727]: I0109 11:53:55.028053 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:53:55 crc kubenswrapper[4727]: I0109 11:53:55.120076 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerStarted","Data":"0e5c2069e5a99786d3be5d2e49a3f70ad6be1c1f764fdaa1dbe03f74a36d829b"} Jan 09 11:53:56 crc kubenswrapper[4727]: I0109 11:53:56.133332 4727 generic.go:334] "Generic (PLEG): container finished" podID="26aacbc8-deff-4e22-931d-552244f5bfcc" containerID="2ffcaf6d5f244e62ba5b5943d33b9a20c1499d2655517d727b7bbc96d3ee9107" exitCode=0 Jan 09 11:53:56 crc kubenswrapper[4727]: I0109 11:53:56.133454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerDied","Data":"2ffcaf6d5f244e62ba5b5943d33b9a20c1499d2655517d727b7bbc96d3ee9107"} Jan 09 11:54:00 crc kubenswrapper[4727]: I0109 11:54:00.181022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerStarted","Data":"193e8eec92c5573dff17d7e7f9cab49b98d7f6e8564be54a99606cfbf0975025"} Jan 09 11:54:01 crc kubenswrapper[4727]: I0109 11:54:01.194152 4727 generic.go:334] "Generic (PLEG): container finished" podID="26aacbc8-deff-4e22-931d-552244f5bfcc" containerID="193e8eec92c5573dff17d7e7f9cab49b98d7f6e8564be54a99606cfbf0975025" exitCode=0 Jan 09 11:54:01 crc kubenswrapper[4727]: I0109 11:54:01.194215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" 
event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerDied","Data":"193e8eec92c5573dff17d7e7f9cab49b98d7f6e8564be54a99606cfbf0975025"} Jan 09 11:54:02 crc kubenswrapper[4727]: I0109 11:54:02.210159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerStarted","Data":"9fdaf8f629f89ef6f8d288061c3b747b90fb871fa6476d3728fa5c8f90a3f81a"} Jan 09 11:54:02 crc kubenswrapper[4727]: I0109 11:54:02.254709 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tm96" podStartSLOduration=2.697009149 podStartE2EDuration="8.254656876s" podCreationTimestamp="2026-01-09 11:53:54 +0000 UTC" firstStartedPulling="2026-01-09 11:53:56.136345532 +0000 UTC m=+4081.586250313" lastFinishedPulling="2026-01-09 11:54:01.693993259 +0000 UTC m=+4087.143898040" observedRunningTime="2026-01-09 11:54:02.243486883 +0000 UTC m=+4087.693391754" watchObservedRunningTime="2026-01-09 11:54:02.254656876 +0000 UTC m=+4087.704561667" Jan 09 11:54:04 crc kubenswrapper[4727]: I0109 11:54:04.663275 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:04 crc kubenswrapper[4727]: I0109 11:54:04.663829 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:04 crc kubenswrapper[4727]: I0109 11:54:04.880001 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.709656 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.784124 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.841212 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.841555 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-962zg" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" containerID="cri-o://33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd" gracePeriod=2 Jan 09 11:54:15 crc kubenswrapper[4727]: I0109 11:54:15.343461 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerID="33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd" exitCode=0 Jan 09 11:54:15 crc kubenswrapper[4727]: I0109 11:54:15.343540 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd"} Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.216689 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.330981 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"ef9e8739-e51d-4fa8-9970-ce63af133d20\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.331277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod \"ef9e8739-e51d-4fa8-9970-ce63af133d20\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.331392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"ef9e8739-e51d-4fa8-9970-ce63af133d20\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.332954 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities" (OuterVolumeSpecName: "utilities") pod "ef9e8739-e51d-4fa8-9970-ce63af133d20" (UID: "ef9e8739-e51d-4fa8-9970-ce63af133d20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.342643 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v" (OuterVolumeSpecName: "kube-api-access-tdx5v") pod "ef9e8739-e51d-4fa8-9970-ce63af133d20" (UID: "ef9e8739-e51d-4fa8-9970-ce63af133d20"). InnerVolumeSpecName "kube-api-access-tdx5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.359672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"930189ee498333983e08c7ab2e58382299db3fb83cb58d6430015969c8cef074"} Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.359738 4727 scope.go:117] "RemoveContainer" containerID="33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.359983 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.407413 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef9e8739-e51d-4fa8-9970-ce63af133d20" (UID: "ef9e8739-e51d-4fa8-9970-ce63af133d20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.425485 4727 scope.go:117] "RemoveContainer" containerID="5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.433959 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") on node \"crc\" DevicePath \"\"" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.433998 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.434016 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.455838 4727 scope.go:117] "RemoveContainer" containerID="bf159a57ad831d29f382ffa97b36634879c00d9cea9b38064632f3c6da0f08f3" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.697215 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.705313 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.873597 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" path="/var/lib/kubelet/pods/ef9e8739-e51d-4fa8-9970-ce63af133d20/volumes" Jan 09 11:54:39 crc kubenswrapper[4727]: I0109 11:54:39.405857 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:54:39 crc kubenswrapper[4727]: I0109 11:54:39.406776 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:55:09 crc kubenswrapper[4727]: I0109 11:55:09.404688 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:55:09 crc kubenswrapper[4727]: I0109 11:55:09.405580 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.405046 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.405812 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.405874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.406868 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.406927 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f" gracePeriod=600 Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.253658 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f" exitCode=0 Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.254689 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f"} Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.254739 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"} Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.254757 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.453524 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 11:56:28 crc kubenswrapper[4727]: E0109 11:56:28.455018 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-utilities" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455037 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-utilities" Jan 09 11:56:28 crc kubenswrapper[4727]: E0109 11:56:28.455080 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-content" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455089 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-content" Jan 09 11:56:28 crc kubenswrapper[4727]: E0109 11:56:28.455104 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455113 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455345 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.459653 4727 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.462851 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z2dx8"/"default-dockercfg-sx8dq" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.463269 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2dx8"/"openshift-service-ca.crt" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.468304 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2dx8"/"kube-root-ca.crt" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.480584 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.511920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.512293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.614904 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " 
pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.615041 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.615693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.038272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.086584 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.536913 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 11:56:29 crc kubenswrapper[4727]: W0109 11:56:29.542311 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6406f2a3_a4e6_4379_a2a6_adcc1eb952fa.slice/crio-a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1 WatchSource:0}: Error finding container a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1: Status 404 returned error can't find the container with id a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1 Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.751055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerStarted","Data":"a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1"} Jan 09 11:56:30 crc kubenswrapper[4727]: I0109 11:56:30.763955 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerStarted","Data":"ba3fec2faa6d34d88b2c0ab138a91ee7a89e044844462fc1ed9ddd8ff5e29edf"} Jan 09 11:56:30 crc kubenswrapper[4727]: I0109 11:56:30.764469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerStarted","Data":"97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4"} Jan 09 11:56:30 crc kubenswrapper[4727]: I0109 11:56:30.794406 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" podStartSLOduration=2.794375793 
podStartE2EDuration="2.794375793s" podCreationTimestamp="2026-01-09 11:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:56:30.787866977 +0000 UTC m=+4236.237771768" watchObservedRunningTime="2026-01-09 11:56:30.794375793 +0000 UTC m=+4236.244280574" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.643110 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-mgdwz"] Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.645693 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.822267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.822809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.924614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.924697 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.924814 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.953473 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.981170 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:34 crc kubenswrapper[4727]: I0109 11:56:34.804400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" event={"ID":"22abbe2c-763b-4058-8efb-ad09eb687bc9","Type":"ContainerStarted","Data":"8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431"} Jan 09 11:56:34 crc kubenswrapper[4727]: I0109 11:56:34.805407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" event={"ID":"22abbe2c-763b-4058-8efb-ad09eb687bc9","Type":"ContainerStarted","Data":"017e73f51997a03f76ba5c753ba49e6ae59e3b16cc9dee47e3993b90f6c775c3"} Jan 09 11:56:34 crc kubenswrapper[4727]: I0109 11:56:34.824103 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" podStartSLOduration=1.8240797180000001 podStartE2EDuration="1.824079718s" podCreationTimestamp="2026-01-09 11:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:56:34.820774199 +0000 UTC m=+4240.270678990" watchObservedRunningTime="2026-01-09 11:56:34.824079718 +0000 UTC m=+4240.273984499" Jan 09 11:57:14 crc kubenswrapper[4727]: I0109 11:57:14.216990 4727 generic.go:334] "Generic (PLEG): container finished" podID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerID="8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431" exitCode=0 Jan 09 11:57:14 crc kubenswrapper[4727]: I0109 11:57:14.217064 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" event={"ID":"22abbe2c-763b-4058-8efb-ad09eb687bc9","Type":"ContainerDied","Data":"8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431"} Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.361226 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.400457 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-mgdwz"] Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.414587 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-mgdwz"] Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.476259 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"22abbe2c-763b-4058-8efb-ad09eb687bc9\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.476353 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"22abbe2c-763b-4058-8efb-ad09eb687bc9\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.476486 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host" (OuterVolumeSpecName: "host") pod "22abbe2c-763b-4058-8efb-ad09eb687bc9" (UID: "22abbe2c-763b-4058-8efb-ad09eb687bc9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.477193 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.843004 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944" (OuterVolumeSpecName: "kube-api-access-vz944") pod "22abbe2c-763b-4058-8efb-ad09eb687bc9" (UID: "22abbe2c-763b-4058-8efb-ad09eb687bc9"). InnerVolumeSpecName "kube-api-access-vz944". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.887885 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:16 crc kubenswrapper[4727]: I0109 11:57:16.246027 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017e73f51997a03f76ba5c753ba49e6ae59e3b16cc9dee47e3993b90f6c775c3" Jan 09 11:57:16 crc kubenswrapper[4727]: I0109 11:57:16.246078 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:57:16 crc kubenswrapper[4727]: I0109 11:57:16.873452 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" path="/var/lib/kubelet/pods/22abbe2c-763b-4058-8efb-ad09eb687bc9/volumes" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.334535 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-9lb5p"] Jan 09 11:57:17 crc kubenswrapper[4727]: E0109 11:57:17.335844 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerName="container-00" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.335868 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerName="container-00" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.336115 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerName="container-00" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.337175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.423309 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.423440 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.525761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.525959 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.525966 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc 
kubenswrapper[4727]: I0109 11:57:17.551543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.660386 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:18 crc kubenswrapper[4727]: I0109 11:57:18.265444 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" event={"ID":"3cdf248f-c28f-4031-92b0-4945708a36d5","Type":"ContainerStarted","Data":"78754fc872d09ed1a4f5c1e91dae695e35b76032e061eccc50bff1fffd35123a"} Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.276108 4727 generic.go:334] "Generic (PLEG): container finished" podID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerID="7bcd781aca45bcf3260e2bd37f7bdcf3d57df1292214141f1aa5d63a4bcad351" exitCode=0 Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.276258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" event={"ID":"3cdf248f-c28f-4031-92b0-4945708a36d5","Type":"ContainerDied","Data":"7bcd781aca45bcf3260e2bd37f7bdcf3d57df1292214141f1aa5d63a4bcad351"} Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.768618 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-9lb5p"] Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.778203 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-9lb5p"] Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.398382 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.495235 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"3cdf248f-c28f-4031-92b0-4945708a36d5\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.495389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"3cdf248f-c28f-4031-92b0-4945708a36d5\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.495460 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host" (OuterVolumeSpecName: "host") pod "3cdf248f-c28f-4031-92b0-4945708a36d5" (UID: "3cdf248f-c28f-4031-92b0-4945708a36d5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.496009 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.502831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2" (OuterVolumeSpecName: "kube-api-access-cwmv2") pod "3cdf248f-c28f-4031-92b0-4945708a36d5" (UID: "3cdf248f-c28f-4031-92b0-4945708a36d5"). InnerVolumeSpecName "kube-api-access-cwmv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.597216 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.873160 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" path="/var/lib/kubelet/pods/3cdf248f-c28f-4031-92b0-4945708a36d5/volumes" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.961256 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-csgdz"] Jan 09 11:57:20 crc kubenswrapper[4727]: E0109 11:57:20.962048 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerName="container-00" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.962080 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerName="container-00" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.962272 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerName="container-00" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.963097 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.004841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.004955 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.107129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.107238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.107392 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc 
kubenswrapper[4727]: I0109 11:57:21.126031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.282694 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.295909 4727 scope.go:117] "RemoveContainer" containerID="7bcd781aca45bcf3260e2bd37f7bdcf3d57df1292214141f1aa5d63a4bcad351" Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.296068 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:21 crc kubenswrapper[4727]: W0109 11:57:21.319594 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8aa84c32_e586_44e1_bf65_2eca20015743.slice/crio-dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48 WatchSource:0}: Error finding container dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48: Status 404 returned error can't find the container with id dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48 Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.308892 4727 generic.go:334] "Generic (PLEG): container finished" podID="8aa84c32-e586-44e1-bf65-2eca20015743" containerID="09b9f88278a379f541541f5230c3d1e100c736600a76b575d9fb665faea3eeac" exitCode=0 Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.308992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" 
event={"ID":"8aa84c32-e586-44e1-bf65-2eca20015743","Type":"ContainerDied","Data":"09b9f88278a379f541541f5230c3d1e100c736600a76b575d9fb665faea3eeac"} Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.309775 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" event={"ID":"8aa84c32-e586-44e1-bf65-2eca20015743","Type":"ContainerStarted","Data":"dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48"} Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.362113 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-csgdz"] Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.373407 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-csgdz"] Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.421705 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.554754 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"8aa84c32-e586-44e1-bf65-2eca20015743\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.555178 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"8aa84c32-e586-44e1-bf65-2eca20015743\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.554903 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host" (OuterVolumeSpecName: "host") pod "8aa84c32-e586-44e1-bf65-2eca20015743" (UID: 
"8aa84c32-e586-44e1-bf65-2eca20015743"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.556317 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.564491 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw" (OuterVolumeSpecName: "kube-api-access-btjtw") pod "8aa84c32-e586-44e1-bf65-2eca20015743" (UID: "8aa84c32-e586-44e1-bf65-2eca20015743"). InnerVolumeSpecName "kube-api-access-btjtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.658672 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:24 crc kubenswrapper[4727]: I0109 11:57:24.330711 4727 scope.go:117] "RemoveContainer" containerID="09b9f88278a379f541541f5230c3d1e100c736600a76b575d9fb665faea3eeac" Jan 09 11:57:24 crc kubenswrapper[4727]: I0109 11:57:24.331209 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" Jan 09 11:57:24 crc kubenswrapper[4727]: I0109 11:57:24.881250 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" path="/var/lib/kubelet/pods/8aa84c32-e586-44e1-bf65-2eca20015743/volumes" Jan 09 11:57:39 crc kubenswrapper[4727]: I0109 11:57:39.404789 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:57:39 crc kubenswrapper[4727]: I0109 11:57:39.405902 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.681794 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api/0.log" Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.885360 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api-log/0.log" Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.888745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener/0.log" Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.922349 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener-log/0.log" Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.628475 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker/0.log" Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.642956 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker-log/0.log" Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.871267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc_23e25abc-b16a-4273-846e-7fab7ef1a095/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.956615 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-central-agent/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.014798 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-notification-agent/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.135558 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/proxy-httpd/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.135844 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/sg-core/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.279483 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 
11:57:51.470812 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api-log/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.590663 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/cinder-scheduler/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.636250 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/probe/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.750891 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-x2djn_f1169cca-13ce-4a18-8901-faa73fc5b913/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.920207 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2l88s_fc6114d6-7052-46b3-a8e5-c8b9731cc92c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.238808 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.416594 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.492926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/dnsmasq-dns/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.531495 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz_79cfc519-9725-4957-b42c-d262651895a3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.719903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-httpd/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.770620 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-log/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.913342 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-httpd/0.log" Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.948883 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-log/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.167147 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.329530 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qplw9_a4f9d22c-83b0-4c0c-95e3-a2b2937908db/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.590887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qs4rr_e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.619926 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon-log/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.812388 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-666857844b-c2hp6_3738e7aa-d182-43a0-962c-b735526851f2/keystone-api/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.865896 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3/kube-state-metrics/0.log" Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.911748 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zs24v_a56270d2-f80b-4dda-a64c-fe39d4b4a9e5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:54 crc kubenswrapper[4727]: I0109 11:57:54.222063 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-httpd/0.log" Jan 09 11:57:54 crc kubenswrapper[4727]: I0109 11:57:54.357592 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-api/0.log" Jan 09 11:57:54 crc kubenswrapper[4727]: I0109 11:57:54.425353 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82_92bbfcf1-befd-42df-a532-97f9a3bd22d0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.121502 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-log/0.log" Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.169670 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3aab78e7-6f64-4c9e-bb37-f670092f06eb/nova-cell0-conductor-conductor/0.log" Jan 
09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.479747 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6a601271-3d79-4446-bc6f-81b4490541f4/nova-cell1-conductor-conductor/0.log" Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.558746 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-api/0.log" Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.627449 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7275705c-d408-4eb4-af28-b9b51403b913/nova-cell1-novncproxy-novncproxy/0.log" Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.951713 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s9spc_291b6783-3c71-4449-b696-27c7c340c41a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.104205 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-log/0.log" Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.582918 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log" Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.752004 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1203f055-468b-48e1-b859-78a4d11d5034/nova-scheduler-scheduler/0.log" Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.863733 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/galera/0.log" Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.885390 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log" Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.153545 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log" Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.372240 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log" Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.398719 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/galera/0.log" Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.583854 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_06c8d5e8-c424-4b08-98a2-8e89fa5a27b4/openstackclient/0.log" Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.705942 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p58fw_ede60be2-7d1e-482a-b994-6c552d322575/openstack-network-exporter/0.log" Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.930623 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwrp2_d81594ff-04f5-47c2-9620-db583609e9aa/ovn-controller/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.126415 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-metadata/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.138475 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.412917 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovs-vswitchd/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.445785 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.449903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.685651 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/openstack-network-exporter/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.734144 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rhzcm_5ebde73e-573e-4b52-b779-dd3cd03761e0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.758864 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/ovn-northd/0.log" Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.992247 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/openstack-network-exporter/0.log" Jan 09 11:57:59 crc kubenswrapper[4727]: I0109 11:57:59.074501 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/ovsdbserver-nb/0.log" Jan 09 11:57:59 crc kubenswrapper[4727]: I0109 11:57:59.258821 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/openstack-network-exporter/0.log" Jan 09 11:57:59 crc kubenswrapper[4727]: I0109 11:57:59.339994 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/ovsdbserver-sb/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.017691 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.051536 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-api/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.147985 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-log/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.343498 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/rabbitmq/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.348748 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.408135 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.745583 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.788706 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd_72a53995-d5d0-4795-a1c7-f8a570a0ff6a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.797929 
4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/rabbitmq/0.log" Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.003300 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4zggm_ce764242-0f23-4580-87ee-9f0f2f81fb0e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.153078 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv_d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.327117 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-27qwg_6f717d58-9e42-4359-89e8-70a60345d546/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.440394 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9n6wb_247ff33e-a764-4e75-9d54-2c45ae8d8ca7/ssh-known-hosts-edpm-deployment/0.log" Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.793169 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-httpd/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.186016 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-server/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.288524 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-t2qwp_5a7df215-53c5-4771-95de-9af59255b3de/swift-ring-rebalance/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.450281 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-auditor/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.557152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-reaper/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.576041 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-replicator/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.616309 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-server/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.671414 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-auditor/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.835024 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-server/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.852579 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-updater/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.903857 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-auditor/0.log" Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.939616 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-replicator/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.090074 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-replicator/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.090390 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-expirer/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.229117 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-server/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.397686 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-updater/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.475411 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/rsync/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.506132 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/swift-recon-cron/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.709952 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5_2d4033a7-e7a4-495b-bbb9-63e8ae1189bc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.750609 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e/tempest-tests-tempest-tests-runner/0.log" Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.946937 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_65b47f8e-eab5-4015-9926-36dcf8a8a1f0/test-operator-logs-container/0.log" Jan 09 11:58:04 crc kubenswrapper[4727]: I0109 
11:58:04.086472 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m4njz_6811cbf2-94eb-44a0-ae3e-8f0e35163df5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:58:09 crc kubenswrapper[4727]: I0109 11:58:09.404562 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:58:09 crc kubenswrapper[4727]: I0109 11:58:09.405423 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:58:13 crc kubenswrapper[4727]: I0109 11:58:13.506968 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0e6e8606-58f3-4640-939b-afa25ce1ce03/memcached/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.309102 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-nd7lx_f57a8b19-1f94-4cc4-af28-f7c506f93de5/manager/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.457903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-l25ck_63639485-2ddb-4983-921a-9de5dda98f0f/manager/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.572723 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-l4fld_e8c91cda-4264-401f-83de-20ddcf5f0d4d/manager/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 
11:58:33.652723 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.828976 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.885259 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log" Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.900553 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.015242 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.039886 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.070602 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/extract/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.234819 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-s49vr_9891b17e-81f9-4999-b489-db3e162c2a54/manager/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.321726 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-w5c7d_9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6/manager/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.467358 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-nxc7n_51db22df-3d25-4c12-b104-eb3848940958/manager/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.673784 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-g5ckd_e4480343-1920-4926-8668-e47e5bbfb646/manager/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.773094 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-qpmcd_24886819-7c1f-4b1f-880e-4b2102e302c1/manager/0.log" Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.896814 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-4nzmw_6040cced-684e-4521-9c4e-1debba9d5320/manager/0.log" Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.591470 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-6gtz5_ddfee9e4-1084-4750-ab19-473dde7a2fb6/manager/0.log" Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.671152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-4dv6h_e604d4a1-bf95-49df-a854-b15337b7fae7/manager/0.log" Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.881265 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-q8wx7_848b9588-10d2-4bd4-bcc0-cccd55334c85/manager/0.log" Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.971097 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-69kx5_9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.086113 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-pnk72_fab7e320-c116-4603-9aac-2e310be1b209/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.172365 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh_3550e1cd-642e-481c-b98f-b6d3770f51ca/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.564780 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-75c59d454f-d829c_f749f148-ae4b-475b-90d9-1028d134d57c/operator/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.642293 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cj5kr_26bfbd30-40a2-466a-862d-6cdf25911f85/registry-server/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.900910 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-gkkm4_558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.988697 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-cc8k9_15c1d49b-c086-4c30-9a99-e0fb597dd76f/manager/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.206219 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2m6mz_ee5399a2-4352-4013-9c26-a40e4bc815e3/operator/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.622240 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7db9fd4464-5h9ft_6a33b307-e521-43c4-8e35-3e9d7d553716/manager/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.808389 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-x4r9z_c371fa9c-dd02-4673-99aa-4ec8fa8d9e07/manager/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.849059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-vgcgj_ba0be6cc-1e31-4421-aa33-1e2514069376/manager/0.log" Jan 09 11:58:38 crc kubenswrapper[4727]: I0109 11:58:38.000143 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-m8s9d_e3f94965-fce3-4e35-9f97-5047e05dd50a/manager/0.log" Jan 09 11:58:38 crc kubenswrapper[4727]: I0109 11:58:38.037076 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-jvkn5_9300f2a9-97a8-4868-9485-8dd5d51df39e/manager/0.log" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.405392 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.405936 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.406004 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.406996 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.407070 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" gracePeriod=600 Jan 09 11:58:39 crc kubenswrapper[4727]: E0109 11:58:39.607639 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.167466 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" exitCode=0 Jan 09 
11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.167579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"} Jan 09 11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.168134 4727 scope.go:117] "RemoveContainer" containerID="cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f" Jan 09 11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.169113 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:58:40 crc kubenswrapper[4727]: E0109 11:58:40.169419 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:58:51 crc kubenswrapper[4727]: I0109 11:58:51.860575 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:58:51 crc kubenswrapper[4727]: E0109 11:58:51.861578 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:58:58 crc kubenswrapper[4727]: I0109 11:58:58.532017 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w6pvx_879d1222-addb-406a-b8fd-3ce4068c1d08/control-plane-machine-set-operator/0.log" Jan 09 11:58:58 crc kubenswrapper[4727]: I0109 11:58:58.732195 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/kube-rbac-proxy/0.log" Jan 09 11:58:58 crc kubenswrapper[4727]: I0109 11:58:58.736433 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/machine-api-operator/0.log" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.315597 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:02 crc kubenswrapper[4727]: E0109 11:59:02.316758 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" containerName="container-00" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.316775 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" containerName="container-00" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.317022 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" containerName="container-00" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.319275 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.331725 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.459919 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.460514 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.460592 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.562914 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.563038 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.563314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.563791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.564067 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.585715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.644608 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:03 crc kubenswrapper[4727]: I0109 11:59:03.259170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:03 crc kubenswrapper[4727]: I0109 11:59:03.860599 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:59:03 crc kubenswrapper[4727]: E0109 11:59:03.862663 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.427837 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" exitCode=0 Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.427976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95"} Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.428321 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerStarted","Data":"11487aafdd4e5c000ef83e33c4fcf09588b392155e2980f91294cde1216b9bbc"} Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.430933 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 
11:59:05 crc kubenswrapper[4727]: I0109 11:59:05.443275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerStarted","Data":"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b"} Jan 09 11:59:06 crc kubenswrapper[4727]: I0109 11:59:06.459094 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" exitCode=0 Jan 09 11:59:06 crc kubenswrapper[4727]: I0109 11:59:06.459175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b"} Jan 09 11:59:07 crc kubenswrapper[4727]: I0109 11:59:07.490559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerStarted","Data":"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad"} Jan 09 11:59:07 crc kubenswrapper[4727]: I0109 11:59:07.516869 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgctn" podStartSLOduration=3.067996176 podStartE2EDuration="5.516849011s" podCreationTimestamp="2026-01-09 11:59:02 +0000 UTC" firstStartedPulling="2026-01-09 11:59:04.430640518 +0000 UTC m=+4389.880545299" lastFinishedPulling="2026-01-09 11:59:06.879493353 +0000 UTC m=+4392.329398134" observedRunningTime="2026-01-09 11:59:07.51168168 +0000 UTC m=+4392.961586471" watchObservedRunningTime="2026-01-09 11:59:07.516849011 +0000 UTC m=+4392.966753792" Jan 09 11:59:12 crc kubenswrapper[4727]: I0109 11:59:12.645055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:12 crc kubenswrapper[4727]: I0109 11:59:12.646920 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:12 crc kubenswrapper[4727]: I0109 11:59:12.704769 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:13 crc kubenswrapper[4727]: I0109 11:59:13.611559 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:13 crc kubenswrapper[4727]: I0109 11:59:13.679092 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.011669 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2qqks_2715d39f-d488-448b-b6f2-ff592dea195a/cert-manager-controller/0.log" Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.217720 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cbsgr_3a45eda8-4151-4b6c-b0f2-ab6416dc34e9/cert-manager-cainjector/0.log" Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.281869 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qlfjg_5cee0bf6-27dd-4944-bbef-574afbae1542/cert-manager-webhook/0.log" Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.577622 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgctn" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server" containerID="cri-o://01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" gracePeriod=2 Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.328297 4727 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.404098 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.404673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.404721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.405736 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities" (OuterVolumeSpecName: "utilities") pod "b5aa136f-618b-42f3-b1ad-97199b0fb4f7" (UID: "b5aa136f-618b-42f3-b1ad-97199b0fb4f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.411129 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8" (OuterVolumeSpecName: "kube-api-access-nkrd8") pod "b5aa136f-618b-42f3-b1ad-97199b0fb4f7" (UID: "b5aa136f-618b-42f3-b1ad-97199b0fb4f7"). InnerVolumeSpecName "kube-api-access-nkrd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.465208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5aa136f-618b-42f3-b1ad-97199b0fb4f7" (UID: "b5aa136f-618b-42f3-b1ad-97199b0fb4f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.506843 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") on node \"crc\" DevicePath \"\"" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.506889 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.506903 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590470 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" exitCode=0 Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590540 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad"} Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590571 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"11487aafdd4e5c000ef83e33c4fcf09588b392155e2980f91294cde1216b9bbc"} Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590598 4727 scope.go:117] "RemoveContainer" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590757 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.632751 4727 scope.go:117] "RemoveContainer" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.647650 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.659669 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.663719 4727 scope.go:117] "RemoveContainer" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.700405 4727 scope.go:117] "RemoveContainer" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" Jan 09 11:59:16 crc kubenswrapper[4727]: E0109 11:59:16.702942 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad\": container with ID starting with 01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad not found: ID does not exist" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 
11:59:16.703010 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad"} err="failed to get container status \"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad\": rpc error: code = NotFound desc = could not find container \"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad\": container with ID starting with 01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad not found: ID does not exist" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.703047 4727 scope.go:117] "RemoveContainer" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" Jan 09 11:59:16 crc kubenswrapper[4727]: E0109 11:59:16.704502 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b\": container with ID starting with 7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b not found: ID does not exist" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.704589 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b"} err="failed to get container status \"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b\": rpc error: code = NotFound desc = could not find container \"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b\": container with ID starting with 7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b not found: ID does not exist" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.704622 4727 scope.go:117] "RemoveContainer" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" Jan 09 11:59:16 crc 
kubenswrapper[4727]: E0109 11:59:16.704953 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95\": container with ID starting with f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95 not found: ID does not exist" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.704968 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95"} err="failed to get container status \"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95\": rpc error: code = NotFound desc = could not find container \"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95\": container with ID starting with f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95 not found: ID does not exist" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.896623 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:59:16 crc kubenswrapper[4727]: E0109 11:59:16.896988 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.912854 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" path="/var/lib/kubelet/pods/b5aa136f-618b-42f3-b1ad-97199b0fb4f7/volumes" Jan 09 11:59:28 crc 
kubenswrapper[4727]: I0109 11:59:28.861210 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:59:28 crc kubenswrapper[4727]: E0109 11:59:28.862137 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.173480 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-6dwzn_9721a7da-2c8a-4a0d-ac56-8b4b11c028cd/nmstate-console-plugin/0.log" Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.387224 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4757d_673fefde-8c1b-46fe-a88a-00b3fa962a3e/nmstate-handler/0.log" Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.485221 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/kube-rbac-proxy/0.log" Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.527458 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/nmstate-metrics/0.log" Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.701279 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p86wv_b4c7550e-1eaa-4e85-b44d-c752f6e37955/nmstate-operator/0.log" Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.760138 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-5lc88_7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac/nmstate-webhook/0.log" Jan 09 11:59:39 crc kubenswrapper[4727]: I0109 11:59:39.860552 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:59:39 crc kubenswrapper[4727]: E0109 11:59:39.861649 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:59:53 crc kubenswrapper[4727]: I0109 11:59:53.860970 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:59:53 crc kubenswrapper[4727]: E0109 11:59:53.862323 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:59:56 crc kubenswrapper[4727]: I0109 11:59:56.257009 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-67d6487995-f424z" podUID="f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.202580 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"] Jan 09 12:00:00 crc 
kubenswrapper[4727]: E0109 12:00:00.203972 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.203986 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server" Jan 09 12:00:00 crc kubenswrapper[4727]: E0109 12:00:00.203998 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-utilities" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.204004 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-utilities" Jan 09 12:00:00 crc kubenswrapper[4727]: E0109 12:00:00.204032 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-content" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.204038 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-content" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.204304 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.205080 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.208128 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.208417 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.213270 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"] Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.332641 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.333135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.333224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.435281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.435502 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.435574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.436624 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.454033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.458916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.526116 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:01 crc kubenswrapper[4727]: I0109 12:00:01.026799 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"] Jan 09 12:00:02 crc kubenswrapper[4727]: I0109 12:00:02.023655 4727 generic.go:334] "Generic (PLEG): container finished" podID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerID="7a8da01c55d77faff9fdb244545f165bdc12ccfae94df90194b9a9fbeed83e23" exitCode=0 Jan 09 12:00:02 crc kubenswrapper[4727]: I0109 12:00:02.023733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" event={"ID":"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa","Type":"ContainerDied","Data":"7a8da01c55d77faff9fdb244545f165bdc12ccfae94df90194b9a9fbeed83e23"} Jan 09 12:00:02 crc kubenswrapper[4727]: I0109 12:00:02.024082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" 
event={"ID":"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa","Type":"ContainerStarted","Data":"2eb18636008350c6a890c3311d9b2fc9275f267bdb200d76bf2377928fd85240"} Jan 09 12:00:03 crc kubenswrapper[4727]: I0109 12:00:03.428899 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/kube-rbac-proxy/0.log" Jan 09 12:00:03 crc kubenswrapper[4727]: I0109 12:00:03.577474 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/controller/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.036657 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.044161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" event={"ID":"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa","Type":"ContainerDied","Data":"2eb18636008350c6a890c3311d9b2fc9275f267bdb200d76bf2377928fd85240"} Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.044216 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.044214 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb18636008350c6a890c3311d9b2fc9275f267bdb200d76bf2377928fd85240" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.130052 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.130160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.130404 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.131418 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume" (OuterVolumeSpecName: "config-volume") pod "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" (UID: "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.140336 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-6msbv_ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee/frr-k8s-webhook-server/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.219563 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.233201 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.235126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" (UID: "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.235330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc" (OuterVolumeSpecName: "kube-api-access-rj5tc") pod "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" (UID: "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa"). InnerVolumeSpecName "kube-api-access-rj5tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.335436 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.335483 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") on node \"crc\" DevicePath \"\"" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.472840 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.472923 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.491445 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.508075 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.729679 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.734088 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.779081 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.798887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.867336 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:00:04 crc kubenswrapper[4727]: E0109 12:00:04.867807 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.975977 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.013080 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.054195 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.108827 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/controller/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.146951 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.170784 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.249750 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr-metrics/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.293711 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.339689 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy-frr/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.483256 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/reloader/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.592023 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fc8994bc9-qg228_d7eb33c1-26fc-47be-8c5b-f235afa77ea8/manager/0.log" Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.863227 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c5db45976-lnrnz_d3f738e6-a0bc-42cd-b4d8-71940837e09f/webhook-server/0.log" Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.047752 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/kube-rbac-proxy/0.log" Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.639830 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/speaker/0.log" Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.776047 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr/0.log" Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.872691 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" path="/var/lib/kubelet/pods/12b68a71-edf6-4fe6-8f5c-92b1424309c6/volumes" Jan 09 12:00:17 crc kubenswrapper[4727]: I0109 12:00:17.860796 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:00:17 crc kubenswrapper[4727]: E0109 12:00:17.863666 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:00:20 crc kubenswrapper[4727]: I0109 12:00:20.849039 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.208545 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.276703 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.313457 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.512217 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.532672 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.539542 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/extract/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.736327 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.852384 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.907080 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 
12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.918882 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.134365 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.142914 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/extract/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.154155 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.394274 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-utilities/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.591022 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-utilities/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.593788 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-content/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.630157 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-content/0.log" Jan 09 
12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.785106 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-utilities/0.log" Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.834820 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-content/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.009482 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.017493 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/registry-server/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.185577 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.185606 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.243138 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.414738 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.433058 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.618130 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-55prz_82b1f92b-6077-4b4c-876a-3d732a78b2cc/marketplace-operator/0.log" Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.773430 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.003567 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/registry-server/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.025334 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.081296 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.117584 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.778158 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.805922 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.937926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/registry-server/0.log" Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.032498 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.399119 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.463951 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.496377 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.655154 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.671030 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 12:00:26 crc kubenswrapper[4727]: I0109 12:00:26.325881 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/registry-server/0.log" Jan 09 
12:00:28 crc kubenswrapper[4727]: I0109 12:00:28.860889 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:00:28 crc kubenswrapper[4727]: E0109 12:00:28.862129 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:00:41 crc kubenswrapper[4727]: I0109 12:00:41.166948 4727 scope.go:117] "RemoveContainer" containerID="84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709" Jan 09 12:00:41 crc kubenswrapper[4727]: I0109 12:00:41.861222 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:00:41 crc kubenswrapper[4727]: E0109 12:00:41.861872 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:00:56 crc kubenswrapper[4727]: I0109 12:00:56.861229 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:00:56 crc kubenswrapper[4727]: E0109 12:00:56.862409 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.038485 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:00:59 crc kubenswrapper[4727]: E0109 12:00:59.039554 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerName="collect-profiles" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.039571 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerName="collect-profiles" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.039883 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerName="collect-profiles" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.041862 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.055952 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.124328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.124428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.124603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.226459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.226546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.226668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.227115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.227346 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.257664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.388558 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.036761 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.167356 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29466001-jz589"] Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.169249 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.184278 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29466001-jz589"] Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254126 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254583 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 
12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.356816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.356948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.356975 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.357003 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 
12:01:00.639715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.640195 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.640967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.641849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.796225 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.409489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29466001-jz589"] Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.632094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerStarted","Data":"49eff92641b572fd3ae79f283a74f80a140d11bbb3959bc4fb63406948b417d3"} Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.635335 4727 generic.go:334] "Generic (PLEG): container finished" podID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb" exitCode=0 Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.635476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"} Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.635818 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerStarted","Data":"c5f7699f3f27e94e26b48ccbdf25b86dac98761208083e17a39c25c60ddb3ed1"} Jan 09 12:01:02 crc kubenswrapper[4727]: E0109 12:01:02.242627 4727 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.200:36970->38.102.83.200:46169: read tcp 38.102.83.200:36970->38.102.83.200:46169: read: connection reset by peer Jan 09 12:01:02 crc kubenswrapper[4727]: I0109 12:01:02.648671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" 
event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerStarted","Data":"33c981394c1c7d5789f6284f030f834f28da49bd617b21080568612b55ba0cd0"} Jan 09 12:01:02 crc kubenswrapper[4727]: I0109 12:01:02.680273 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29466001-jz589" podStartSLOduration=2.680245249 podStartE2EDuration="2.680245249s" podCreationTimestamp="2026-01-09 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 12:01:02.670208988 +0000 UTC m=+4508.120113769" watchObservedRunningTime="2026-01-09 12:01:02.680245249 +0000 UTC m=+4508.130150030" Jan 09 12:01:04 crc kubenswrapper[4727]: I0109 12:01:04.672675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerStarted","Data":"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"} Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.687115 4727 generic.go:334] "Generic (PLEG): container finished" podID="e3394060-4f97-480d-8271-7fb514f60bc0" containerID="33c981394c1c7d5789f6284f030f834f28da49bd617b21080568612b55ba0cd0" exitCode=0 Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.687216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerDied","Data":"33c981394c1c7d5789f6284f030f834f28da49bd617b21080568612b55ba0cd0"} Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.690103 4727 generic.go:334] "Generic (PLEG): container finished" podID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f" exitCode=0 Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.690222 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"} Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.084883 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.147745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.147837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.148379 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.148470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.156114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89" (OuterVolumeSpecName: "kube-api-access-c4w89") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "kube-api-access-c4w89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.156701 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.181278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.231157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data" (OuterVolumeSpecName: "config-data") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251105 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251153 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251166 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251180 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.713716 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerDied","Data":"49eff92641b572fd3ae79f283a74f80a140d11bbb3959bc4fb63406948b417d3"} Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.713825 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49eff92641b572fd3ae79f283a74f80a140d11bbb3959bc4fb63406948b417d3" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.713873 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.861680 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:07 crc kubenswrapper[4727]: E0109 12:01:07.862844 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:01:09 crc kubenswrapper[4727]: I0109 12:01:09.739120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerStarted","Data":"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"} Jan 09 12:01:09 crc kubenswrapper[4727]: I0109 12:01:09.764627 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4jlnh" podStartSLOduration=3.701531347 podStartE2EDuration="10.764599762s" podCreationTimestamp="2026-01-09 12:00:59 +0000 UTC" firstStartedPulling="2026-01-09 12:01:01.637951812 +0000 UTC m=+4507.087856593" lastFinishedPulling="2026-01-09 12:01:08.701020227 +0000 UTC m=+4514.150925008" observedRunningTime="2026-01-09 12:01:09.759085973 +0000 UTC m=+4515.208990764" watchObservedRunningTime="2026-01-09 12:01:09.764599762 +0000 UTC m=+4515.214504553" Jan 09 12:01:14 crc kubenswrapper[4727]: I0109 12:01:14.628423 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" podUID="414cbbdd-31b2-4eae-84a7-33cd1a4961b5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.389170 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.391264 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.453197 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.895259 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.961060 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:01:20 crc kubenswrapper[4727]: I0109 12:01:20.861401 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:20 crc kubenswrapper[4727]: E0109 12:01:20.861791 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:01:21 crc kubenswrapper[4727]: I0109 12:01:21.863112 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4jlnh" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server" containerID="cri-o://03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" gracePeriod=2 Jan 09 12:01:22 
crc kubenswrapper[4727]: I0109 12:01:22.430928 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.540129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.540320 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.540410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.541410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities" (OuterVolumeSpecName: "utilities") pod "cfd08a1b-1ead-450f-b0e6-ea316b43a425" (UID: "cfd08a1b-1ead-450f-b0e6-ea316b43a425"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.546954 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl" (OuterVolumeSpecName: "kube-api-access-l2dsl") pod "cfd08a1b-1ead-450f-b0e6-ea316b43a425" (UID: "cfd08a1b-1ead-450f-b0e6-ea316b43a425"). InnerVolumeSpecName "kube-api-access-l2dsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.643385 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.643465 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.689690 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfd08a1b-1ead-450f-b0e6-ea316b43a425" (UID: "cfd08a1b-1ead-450f-b0e6-ea316b43a425"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.745958 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876239 4727 generic.go:334] "Generic (PLEG): container finished" podID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" exitCode=0 Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"} Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876602 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"c5f7699f3f27e94e26b48ccbdf25b86dac98761208083e17a39c25c60ddb3ed1"} Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876626 4727 scope.go:117] "RemoveContainer" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876796 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.906781 4727 scope.go:117] "RemoveContainer" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.937330 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.944676 4727 scope.go:117] "RemoveContainer" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.948967 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.988693 4727 scope.go:117] "RemoveContainer" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" Jan 09 12:01:22 crc kubenswrapper[4727]: E0109 12:01:22.989260 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5\": container with ID starting with 03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5 not found: ID does not exist" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.989313 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"} err="failed to get container status \"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5\": rpc error: code = NotFound desc = could not find container \"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5\": container with ID starting with 03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5 not found: ID does 
not exist" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.989351 4727 scope.go:117] "RemoveContainer" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f" Jan 09 12:01:22 crc kubenswrapper[4727]: E0109 12:01:22.990112 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f\": container with ID starting with 850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f not found: ID does not exist" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.990191 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"} err="failed to get container status \"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f\": rpc error: code = NotFound desc = could not find container \"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f\": container with ID starting with 850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f not found: ID does not exist" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.990234 4727 scope.go:117] "RemoveContainer" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb" Jan 09 12:01:22 crc kubenswrapper[4727]: E0109 12:01:22.990777 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb\": container with ID starting with 269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb not found: ID does not exist" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.990820 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"} err="failed to get container status \"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb\": rpc error: code = NotFound desc = could not find container \"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb\": container with ID starting with 269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb not found: ID does not exist" Jan 09 12:01:24 crc kubenswrapper[4727]: I0109 12:01:24.878693 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" path="/var/lib/kubelet/pods/cfd08a1b-1ead-450f-b0e6-ea316b43a425/volumes" Jan 09 12:01:32 crc kubenswrapper[4727]: I0109 12:01:32.861749 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:32 crc kubenswrapper[4727]: E0109 12:01:32.862576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:01:44 crc kubenswrapper[4727]: I0109 12:01:44.861606 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:44 crc kubenswrapper[4727]: E0109 12:01:44.862733 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:01:57 crc kubenswrapper[4727]: I0109 12:01:57.860636 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:57 crc kubenswrapper[4727]: E0109 12:01:57.861892 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.507156 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"] Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.508753 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-content" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509593 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-content" Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.509622 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509628 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server" Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.509653 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3394060-4f97-480d-8271-7fb514f60bc0" containerName="keystone-cron" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 
12:02:07.509659 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3394060-4f97-480d-8271-7fb514f60bc0" containerName="keystone-cron" Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.509692 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-utilities" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509698 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-utilities" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509891 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509901 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3394060-4f97-480d-8271-7fb514f60bc0" containerName="keystone-cron" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.511717 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.528499 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"] Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.638908 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.638982 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.639047 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741480 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741655 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.742304 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.836984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.843193 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:08 crc kubenswrapper[4727]: I0109 12:02:08.404501 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"] Jan 09 12:02:09 crc kubenswrapper[4727]: I0109 12:02:09.361493 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9" exitCode=0 Jan 09 12:02:09 crc kubenswrapper[4727]: I0109 12:02:09.361765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"} Jan 09 12:02:09 crc kubenswrapper[4727]: I0109 12:02:09.361947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerStarted","Data":"726bdc9b07fcbf4f96565bf46eb1c11bc1b653a21d6227964a5682ae96f79882"} Jan 09 12:02:10 crc kubenswrapper[4727]: I0109 12:02:10.860876 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:02:10 crc kubenswrapper[4727]: E0109 12:02:10.861647 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:02:11 crc kubenswrapper[4727]: I0109 12:02:11.401498 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6d41a8d-df27-42f8-8d05-c763c454fafd" 
containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf" exitCode=0 Jan 09 12:02:11 crc kubenswrapper[4727]: I0109 12:02:11.401748 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"} Jan 09 12:02:12 crc kubenswrapper[4727]: I0109 12:02:12.414659 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerStarted","Data":"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"} Jan 09 12:02:12 crc kubenswrapper[4727]: I0109 12:02:12.441909 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vk2s7" podStartSLOduration=2.877529828 podStartE2EDuration="5.441883423s" podCreationTimestamp="2026-01-09 12:02:07 +0000 UTC" firstStartedPulling="2026-01-09 12:02:09.365135535 +0000 UTC m=+4574.815040316" lastFinishedPulling="2026-01-09 12:02:11.92948913 +0000 UTC m=+4577.379393911" observedRunningTime="2026-01-09 12:02:12.431889061 +0000 UTC m=+4577.881793862" watchObservedRunningTime="2026-01-09 12:02:12.441883423 +0000 UTC m=+4577.891788194" Jan 09 12:02:17 crc kubenswrapper[4727]: I0109 12:02:17.843394 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:17 crc kubenswrapper[4727]: I0109 12:02:17.844250 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:17 crc kubenswrapper[4727]: I0109 12:02:17.891165 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:18 crc kubenswrapper[4727]: I0109 12:02:18.514410 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:18 crc kubenswrapper[4727]: I0109 12:02:18.600668 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"] Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.484659 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vk2s7" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server" containerID="cri-o://b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" gracePeriod=2 Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.925261 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.935884 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"c6d41a8d-df27-42f8-8d05-c763c454fafd\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.935933 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"c6d41a8d-df27-42f8-8d05-c763c454fafd\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.935967 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"c6d41a8d-df27-42f8-8d05-c763c454fafd\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.939079 
4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities" (OuterVolumeSpecName: "utilities") pod "c6d41a8d-df27-42f8-8d05-c763c454fafd" (UID: "c6d41a8d-df27-42f8-8d05-c763c454fafd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.969106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6d41a8d-df27-42f8-8d05-c763c454fafd" (UID: "c6d41a8d-df27-42f8-8d05-c763c454fafd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.981353 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq" (OuterVolumeSpecName: "kube-api-access-2nwwq") pod "c6d41a8d-df27-42f8-8d05-c763c454fafd" (UID: "c6d41a8d-df27-42f8-8d05-c763c454fafd"). InnerVolumeSpecName "kube-api-access-2nwwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.038418 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.038460 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.038474 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") on node \"crc\" DevicePath \"\"" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498228 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" exitCode=0 Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"} Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498325 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498346 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"726bdc9b07fcbf4f96565bf46eb1c11bc1b653a21d6227964a5682ae96f79882"} Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498374 4727 scope.go:117] "RemoveContainer" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.537038 4727 scope.go:117] "RemoveContainer" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.545948 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"] Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.558930 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"] Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.581144 4727 scope.go:117] "RemoveContainer" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.615226 4727 scope.go:117] "RemoveContainer" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" Jan 09 12:02:21 crc kubenswrapper[4727]: E0109 12:02:21.615843 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac\": container with ID starting with b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac not found: ID does not exist" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.615876 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"} err="failed to get container status \"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac\": rpc error: code = NotFound desc = could not find container \"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac\": container with ID starting with b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac not found: ID does not exist" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.615902 4727 scope.go:117] "RemoveContainer" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf" Jan 09 12:02:21 crc kubenswrapper[4727]: E0109 12:02:21.616123 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf\": container with ID starting with fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf not found: ID does not exist" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.616154 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"} err="failed to get container status \"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf\": rpc error: code = NotFound desc = could not find container \"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf\": container with ID starting with fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf not found: ID does not exist" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.616179 4727 scope.go:117] "RemoveContainer" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9" Jan 09 12:02:21 crc kubenswrapper[4727]: E0109 
12:02:21.616903 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9\": container with ID starting with 2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9 not found: ID does not exist" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9" Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.616927 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"} err="failed to get container status \"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9\": rpc error: code = NotFound desc = could not find container \"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9\": container with ID starting with 2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9 not found: ID does not exist" Jan 09 12:02:22 crc kubenswrapper[4727]: I0109 12:02:22.874301 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" path="/var/lib/kubelet/pods/c6d41a8d-df27-42f8-8d05-c763c454fafd/volumes" Jan 09 12:02:23 crc kubenswrapper[4727]: I0109 12:02:23.861378 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:02:23 crc kubenswrapper[4727]: E0109 12:02:23.861976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:02:28 crc kubenswrapper[4727]: I0109 12:02:28.570500 
4727 generic.go:334] "Generic (PLEG): container finished" podID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerID="97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4" exitCode=0 Jan 09 12:02:28 crc kubenswrapper[4727]: I0109 12:02:28.570561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerDied","Data":"97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4"} Jan 09 12:02:28 crc kubenswrapper[4727]: I0109 12:02:28.571853 4727 scope.go:117] "RemoveContainer" containerID="97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4" Jan 09 12:02:29 crc kubenswrapper[4727]: I0109 12:02:29.470175 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2dx8_must-gather-pnnsk_6406f2a3-a4e6-4379-a2a6-adcc1eb952fa/gather/0.log" Jan 09 12:02:35 crc kubenswrapper[4727]: I0109 12:02:35.861632 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:02:35 crc kubenswrapper[4727]: E0109 12:02:35.862802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:02:40 crc kubenswrapper[4727]: I0109 12:02:40.691627 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 12:02:40 crc kubenswrapper[4727]: I0109 12:02:40.692718 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" 
containerName="copy" containerID="cri-o://ba3fec2faa6d34d88b2c0ab138a91ee7a89e044844462fc1ed9ddd8ff5e29edf" gracePeriod=2 Jan 09 12:02:40 crc kubenswrapper[4727]: I0109 12:02:40.704026 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.223460 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2dx8_must-gather-pnnsk_6406f2a3-a4e6-4379-a2a6-adcc1eb952fa/copy/0.log" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.224551 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.287010 4727 scope.go:117] "RemoveContainer" containerID="8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.326843 4727 scope.go:117] "RemoveContainer" containerID="ba3fec2faa6d34d88b2c0ab138a91ee7a89e044844462fc1ed9ddd8ff5e29edf" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.362171 4727 scope.go:117] "RemoveContainer" containerID="97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.384797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.384912 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " Jan 09 12:02:41 crc 
kubenswrapper[4727]: I0109 12:02:41.396818 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv" (OuterVolumeSpecName: "kube-api-access-hzlhv") pod "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" (UID: "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa"). InnerVolumeSpecName "kube-api-access-hzlhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.487891 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") on node \"crc\" DevicePath \"\"" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.574046 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" (UID: "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.589811 4727 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.699593 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 12:02:42 crc kubenswrapper[4727]: I0109 12:02:42.881458 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" path="/var/lib/kubelet/pods/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa/volumes" Jan 09 12:02:48 crc kubenswrapper[4727]: I0109 12:02:48.860591 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:02:48 crc kubenswrapper[4727]: E0109 12:02:48.861692 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:03:03 crc kubenswrapper[4727]: I0109 12:03:03.860733 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:03:03 crc kubenswrapper[4727]: E0109 12:03:03.861922 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:03:14 crc kubenswrapper[4727]: I0109 12:03:14.867388 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:03:14 crc kubenswrapper[4727]: E0109 12:03:14.868694 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:03:29 crc kubenswrapper[4727]: I0109 12:03:29.861456 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:03:29 crc kubenswrapper[4727]: E0109 12:03:29.862644 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:03:43 crc kubenswrapper[4727]: I0109 12:03:43.861267 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:03:44 crc kubenswrapper[4727]: I0109 12:03:44.317482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"f17b544e60259a44fbe58f713bbb533f08e919f7e326182faa062d2e8e4fead0"} Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.123305 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126300 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-utilities" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126330 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-utilities" Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126351 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126364 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server" Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126388 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy" Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126402 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-content" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126410 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-content" Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126425 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="gather" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126433 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="gather" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126675 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126706 4727 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126722 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="gather" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.128587 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.142917 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.164199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.164314 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.164463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.266626 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.266700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.266774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.267430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.267545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.291699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.452339 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.017732 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:16 crc kubenswrapper[4727]: W0109 12:04:16.025089 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303bd47d_8182_4a89_bd15_9ca2b7d6101d.slice/crio-26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500 WatchSource:0}: Error finding container 26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500: Status 404 returned error can't find the container with id 26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500 Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.667012 4727 generic.go:334] "Generic (PLEG): container finished" podID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" exitCode=0 Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.667476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91"} Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.667616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" 
event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerStarted","Data":"26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500"} Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.670723 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 12:04:18 crc kubenswrapper[4727]: I0109 12:04:18.687964 4727 generic.go:334] "Generic (PLEG): container finished" podID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" exitCode=0 Jan 09 12:04:18 crc kubenswrapper[4727]: I0109 12:04:18.688059 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12"} Jan 09 12:04:19 crc kubenswrapper[4727]: I0109 12:04:19.703159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerStarted","Data":"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3"} Jan 09 12:04:19 crc kubenswrapper[4727]: I0109 12:04:19.739077 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zkn2p" podStartSLOduration=2.102521563 podStartE2EDuration="4.739054385s" podCreationTimestamp="2026-01-09 12:04:15 +0000 UTC" firstStartedPulling="2026-01-09 12:04:16.670227975 +0000 UTC m=+4702.120132756" lastFinishedPulling="2026-01-09 12:04:19.306760797 +0000 UTC m=+4704.756665578" observedRunningTime="2026-01-09 12:04:19.727418899 +0000 UTC m=+4705.177323680" watchObservedRunningTime="2026-01-09 12:04:19.739054385 +0000 UTC m=+4705.188959166" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.452539 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.453306 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.513837 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.815562 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.882091 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:27 crc kubenswrapper[4727]: I0109 12:04:27.790130 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zkn2p" podUID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerName="registry-server" containerID="cri-o://6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" gracePeriod=2 Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.220232 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.293033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.293495 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.293893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.298447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities" (OuterVolumeSpecName: "utilities") pod "303bd47d-8182-4a89-bd15-9ca2b7d6101d" (UID: "303bd47d-8182-4a89-bd15-9ca2b7d6101d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.396174 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.467204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "303bd47d-8182-4a89-bd15-9ca2b7d6101d" (UID: "303bd47d-8182-4a89-bd15-9ca2b7d6101d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.498369 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803577 4727 generic.go:334] "Generic (PLEG): container finished" podID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" exitCode=0 Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3"} Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803674 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500"} Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803711 4727 
scope.go:117] "RemoveContainer" containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803711 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.830655 4727 scope.go:117] "RemoveContainer" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.835913 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw" (OuterVolumeSpecName: "kube-api-access-6k8dw") pod "303bd47d-8182-4a89-bd15-9ca2b7d6101d" (UID: "303bd47d-8182-4a89-bd15-9ca2b7d6101d"). InnerVolumeSpecName "kube-api-access-6k8dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.905606 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") on node \"crc\" DevicePath \"\"" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.090739 4727 scope.go:117] "RemoveContainer" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.152546 4727 scope.go:117] "RemoveContainer" containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" Jan 09 12:04:29 crc kubenswrapper[4727]: E0109 12:04:29.154683 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3\": container with ID starting with 6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3 not found: ID does not exist" 
containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.154738 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3"} err="failed to get container status \"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3\": rpc error: code = NotFound desc = could not find container \"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3\": container with ID starting with 6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3 not found: ID does not exist" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.154771 4727 scope.go:117] "RemoveContainer" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" Jan 09 12:04:29 crc kubenswrapper[4727]: E0109 12:04:29.155377 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12\": container with ID starting with 324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12 not found: ID does not exist" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.155413 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12"} err="failed to get container status \"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12\": rpc error: code = NotFound desc = could not find container \"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12\": container with ID starting with 324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12 not found: ID does not exist" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.155437 4727 scope.go:117] 
"RemoveContainer" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" Jan 09 12:04:29 crc kubenswrapper[4727]: E0109 12:04:29.155951 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91\": container with ID starting with ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91 not found: ID does not exist" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.155983 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91"} err="failed to get container status \"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91\": rpc error: code = NotFound desc = could not find container \"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91\": container with ID starting with ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91 not found: ID does not exist" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.206434 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.216584 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:30 crc kubenswrapper[4727]: I0109 12:04:30.873126 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" path="/var/lib/kubelet/pods/303bd47d-8182-4a89-bd15-9ca2b7d6101d/volumes" Jan 09 12:04:59 crc kubenswrapper[4727]: I0109 12:04:59.084897 4727 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod303bd47d-8182-4a89-bd15-9ca2b7d6101d"] err="unable to destroy cgroup 
paths for cgroup [kubepods burstable pod303bd47d-8182-4a89-bd15-9ca2b7d6101d] : Timed out while waiting for systemd to remove kubepods-burstable-pod303bd47d_8182_4a89_bd15_9ca2b7d6101d.slice"